
Representative Line: The Deadly Cookie

Over the years, Armid transitioned from being a full-time developer to a full-time pen tester (as in penetration testing, not testing pens) and he hasn't looked back since. "I did enjoy writing code," he commented, "but there's something really satisfying about demonstrating an XSRF attack to that smug developer who swore up-and-down that his code was perfect." And with things like PCI Compliance to worry about, there are plenty of projects to keep him busy.

"It takes a lot to surprise me anymore," Armid added. "In fact, these days, I'm surprised if I don’t find a SQL Injection vulnerability. That being said, the public-facing operations engine of a large (3,000+ employee) company really surprised me. To say that it was filled with back doors would almost imply that someone thought to install doors — this system has more openings than walls. But there was one vulnerability in particular that trumped them all."

system("chmod 777 " . $_COOKIE["$sessionid"]);

"In fairness, this was one of the more secure lines of code, since most attackers will only mangle their cookies as their fourth… maybe fifth step. Plus, they'd be so distracted by all of the other vulnerabilities that they'd likely overlook this all together."

A way to take out spammers? 3 banks process 95% of spam transactions



If you want to stop spam then going after the banks and payment processors that enable their lucrative trade may be your best bet, according to research performed by a team from the University of California-San Diego, the University of California-Berkeley, and the Budapest University of Technology and Economics. After examining millions of spam e-mails and spam Web sites—and making over 100 purchases from the sites advertised by the spammers—the research team found that just three banks were used to clear more than 95 percent of spam funds.


Pests Are Developing Resistance to Monsanto’s Engineered Supercorn

[Image: the adult stage of the western corn rootworm. Credit: USDA]

Some consumers may have a problem with genetically modified food crops, but in at least one case described in an Iowa State University researcher’s paper there’s one customer that’s happy to consume Monsanto’s GM corn: rootworms, the very pest the corn is modified to thwart. According to the paper, western corn rootworms in at least four northeast Iowa corn fields have developed a resistance to the natural pesticide in corn seed produced by Monsanto, marking the first time a major Midwest pest has developed a resistance to GM crops.

That could spell all kinds of trouble for food crops, farmers, Monsanto, and pretty much everyone who isn’t a western corn rootworm. Though based on isolated cases thus far, the problem could be more widespread, and the paper is bound to rouse another debate on the benefits and demerits of GM crop cultivation and current farm management practices.

The big problem here would be, of course, the widespread proliferation of rootworm resistance. Monsanto first put its rootworm-resistant corn seed on the market in 2003, at a time when herbicide-resistant modifications had already made the company's seed extremely attractive to farmers, who could blanket their fields in herbicide and kill everything but their food crop plants. The corn seed also contains a gene that produces a crystalline protein called Cry3Bb1, which delivers an unpleasant demise to the rootworm (via digestive tract destruction) but otherwise is harmless to other creatures (we think).

The seed was so successful that an estimated one-third of U.S. corn now carries the gene, which means one-third of U.S. corn could potentially become susceptible to rootworm again if the resistance that has reared its head in Iowa is indicative of a larger problem.

The good news is that the same rootworms that are resistant to Monsanto's special sauce are susceptible to a competitor's similar-but-different GM toxin. But if rootworms can develop a resistance to one strain of GM toxin, it stands to reason that, if farming practices remain unchanged, they could eventually become resistant to others.

[WSJ]

Advanced logging on Linux

It seems everywhere I go to work, I face the same operational problems. Once again, I must find a way to centralize logs and provide different levels of access to said logs. Sadly, the syslog protocol is getting quite aged, and it’s just not enough anymore. It works well if you have only a few machines, and only need to provide access to sysadmins. But when developers and other types of users are thrown into the mix, you need a more granular system.

Also, support for the syslog protocol varies greatly from daemon to daemon. One major culprit for me has always been Apache (and web servers in general), because out of the box it only supports syslog for error logs. For access logs, you can use different techniques, but no matter which one you use, you end up with the same problem: if you have more than one vhost on the machine, all their logs end up in the same syslog facility. You can obviously filter them after that, but that's way more work than, say, email logs.

If you work for a company that has a lot of budget, you may consider getting Splunk. It's a very good commercial product, with a free version as well. But last I checked, it was priced at US$7,000 per half gigabyte of logs indexed per day. When you have web servers each generating several gigabytes of logs per day, that could end up being very expensive. That's money that could be used to buy hardware to deploy more open source software, which I tend to prefer.

So after a week or so of investigation, testing and benchmarking, here are my findings. The architecture of the final setup is not settled yet, but it will more or less look like this.

NOTE: Keep in mind that this post won't go into details about clustering and scaling. All of the chosen products can achieve both, and you should be able to work that out on your own easily. I will concentrate on the tools and the workflow.

1. Systems

In order to avoid changing the syslog daemon on each server, I decided to keep either sysklogd or rsyslog intact, as one of them is installed by default. And with a centralized configuration management system, it's simple to add a new entry to send your logs to another machine. Something like this:

*.*                  @somehost.domain.tld

That will take care of most of the systems' logs and send them to a central machine. But what about Apache? As far as error logs are concerned, it's simple: you need to reconfigure Apache to send them to syslog. On RedHat-based systems, you want to edit /etc/httpd/conf/httpd.conf, and on Debian-based systems (I might be wrong, I don't have one handy with Apache installed) you'd modify /etc/apache2/apache2.conf or something like that.

ErrorLog syslog:local2

I use local2 as an example, but you could pick any facility. In any case, I recommend using one of the local* facilities, as it will later allow you to create filters and alerts based on it.
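For example, on the central host, a plain syslog.conf/rsyslog rule can then split that facility out into its own file (the path here is just an assumption):

# Route everything Apache sends on local2 to a dedicated file
local2.*                        /var/log/central/apache-errors.log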

Now, as I mentioned earlier, the issue with web servers is that you can have more than one vhost on a machine, and syslog was never designed with that in mind. So you will inevitably end up with all of a machine's logs in the same facility. Apache supports piping logs to an external program, and I found that the simplest approach is to use the logger tool. Either system-wide or per vhost, you can add something similar to your configuration:

CustomLog "|/bin/logger -p local2.info" combined

During benchmarks, that obviously added some overhead, but not by much: maybe 5%. You will have to plan your site's capacity with that in mind, but I think it's well worth it for the operational advantages.

As far as Java applications are concerned, you can easily configure them to send their logs to syslog using log4j.
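As a minimal sketch, using log4j's SyslogAppender in log4j.properties (the host and facility are assumptions; double-check the property names against your log4j version):

# Send everything at INFO and above to the central syslog host
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=somehost.domain.tld
log4j.appender.SYSLOG.facility=LOCAL3
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%-5p %c: %m%n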

2. Tailing log files (warning)

In the past, I would sometimes use tools that tail log files and forward them to a central syslog machine. That's fine if you just need to archive on a file system somewhere, but there's one important thing to know: what you get in a log file is just a string, not a standard, properly formatted syslog message. Rsyslog is able to write real syslog messages to your files, but I don't recommend it, as they're harder to read. And if you want the webUI at the end of the proposed chain to work properly, you don't want to forward bare strings. It's better to use something like logger if possible.

3. Logstash

Now we leave the legacy world and enter the present day of logging. Logstash is the Swiss Army knife of the logging world. It's a very well designed application that can be used in either agent or server mode. In agent mode, you can configure different types of inputs and outputs, with support for a wide range of transports: files, syslog, AMQP, and so on.

So here, I decided to use it with a syslog input to receive logs from our machines. It's also easily load-balanced with a layer 3 load-balancer. It will then send the logs to an exchange on RabbitMQ.
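As a sketch, the agent configuration might look like this (the host and exchange name are assumptions, and option names have shifted between early Logstash versions, so double-check yours):

# Receive syslog from the machines, publish to a fanout exchange on RabbitMQ
input {
  syslog {
    type => "syslog"
    port => 514    # where the machines' *.* @somehost.domain.tld entries point (needs root)
  }
}

output {
  amqp {
    host => "rabbit.domain.tld"    # assumption
    exchange_type => "fanout"
    name => "syslog"               # the exchange described in the next section
  }
}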

4. RabbitMQ

Now, this part is optional: you could send the logs straight from Logstash to Graylog2, but I prefer to have a middleman do a bit of queuing. Also, once the messages enter an exchange on an AMQP server, you can route them to more than one queue, for different types of processing. In order to do that, though, you need to use a fanout exchange.

Why RabbitMQ? Well, it's written in Erlang and it's very fast: during my benchmarks, it was processing between 4,000 and 5,000 messages per second at the peaks. It's also easily clusterable in an elastic kind of way, and all operations can be done while the cluster is live. I also recommend you install the management plugins, as they provide a very nice webUI to manage your stack. Often, UIs of the sort are limited, but in this case, everything that can be done on the CLI is doable with the webUI as well, and it's very well designed and pleasant to the eye.

So at this point, my messages are entering an exchange named ‘syslog’ that routes messages to two different queues: graylog and elasticsearch.
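If you'd rather script that than click through the webUI, the rabbitmqadmin tool that ships with the management plugin can declare the whole topology. A sketch, using the names above (the durability flags are my assumption):

# Declare the fanout exchange, the two queues, and bind them
rabbitmqadmin declare exchange name=syslog type=fanout durable=true
rabbitmqadmin declare queue name=graylog durable=true
rabbitmqadmin declare queue name=elasticsearch durable=true
rabbitmqadmin declare binding source=syslog destination=graylog
rabbitmqadmin declare binding source=syslog destination=elasticsearch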

NOTE: As I write this, version 2.5.1 has just been released. At this point in time, queues cannot be replicated across your cluster, so if you lose the node where a queue was created, you lose the queue. That said, you can query your queue from any node in the cluster, and support for replicated queues should be available soon. In the meantime, you could use DRBD to make a single node highly available, which would give you high availability at the queue level.

5. Logstash (again)

Now, we’re almost ready to give access to our logs to our different users. At this steps, logs are ready to be sent to Graylog2. So we will use another Logstash instance with an AMQP input, that will read messages from our ‘graylog’ queue and forward them to Graylog2 using a Gelf output. That’s the preferred protocol for importing messages into Graylog2. I won’t provide an example configuration for Logstash, as it’s really easy and straightforward to configure.

6. Graylog2

This is where all the magic happens. Graylog2 has two components: a daemon that receives logs, processes them, and inserts them into a capped collection in MongoDB, and a webUI to view and search them. Now, take a few minutes to go read about capped collections, as it's important to understand them well. Basically, a capped collection works like a FIFO, so in order to take advantage of the speed inherent to a FIFO, you want to make sure that your capped collection fits in RAM. MongoDB will allow you to create a larger one, but you get major performance degradation once your data exceeds the amount of available RAM.
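For reference, creating a capped collection by hand in the mongo shell looks like this (the name "messages" and the size are just illustration; Graylog2 can manage its own):

// Documents stay in insertion order; the oldest fall off once the size cap is hit
db.createCollection("messages", { capped: true, size: 5 * 1024 * 1024 * 1024 })   // ~5GB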

So with that taken into consideration, on my test machine I created a roughly 5GB capped collection, which was able to store more than 10 million messages. It's important to know that Graylog2 is not meant to be used for archiving; where it excels is in a real-time (or close to it) view of your logs. You can also set up different alarms based on facilities, hosts and regexes, which will then email you. Very cool. It allows you to be more proactive, and to detect issues a traditional monitoring system can't find.

7. Elasticsearch

Remember I mentioned two different queues? The reason is simple: once a message is consumed in RabbitMQ, it's not available anymore; it's deleted. So you need more than one queue if you want to use different systems. As Graylog2 is great for short-term analysis and real-time debugging, you can't count on it for archiving. Enter Elasticsearch: a clusterable full-text indexer/search engine based on the Lucene project from the Apache Foundation. Its main goal is to be a very simple to use and configure, elastic search engine, and from my short tests with it, it lives up to that. It discovers new nodes using multicast, so basically you power up a new node, the cluster detects it, recalibrates itself, and voilà.

That’s where I plan to store my logs for long-time archiving. Logstash (is there anything it can’t do?), when run in server mode, provides a web interface to search them. You would use again an AMQP input and a Elasticsearch output to send them to Elasticsearch. Then run another instance of Logstash in web mode. To provide the webUI.

So that’s it. That’s a home-made Splunk-like system. Obviously, it’s more work to deploy, but it’s much cheaper, more flexible and open source. It will grow as needed by your infrastructure. You can use it to aggregate logs from servers, applications and networking equipment easily. And provided granular access to your logs through graylog.

Why You Should Be Blogging


As a network engineer, systems administrator, or just a general IT guy (or girl), you probably don’t think of yourself as a “brand”. You probably just think of yourself as an average person who happens to have a fairly marketable skill.

Each of us, however, is our own unique "brand", and blogging can help you increase and fine-tune your skills. In this article, I'll tell you why you should be blogging and give you some great ideas for getting started.

Are You Studying for a Certification?

If, like many of the folks who read Evil Routers, you’re preparing for a certification exam, you can kill two birds with one stone. You can use your blog to help teach others (like I’ve started doing with Free CCNA Labs) while solidifying your own knowledge at the same time.

As an example, back when I was studying for the CCNP certification, I wrote a lot of “how to” type articles covering things like EIGRP authentication and AS path prepending.

I’d write up the configurations by hand in a text editor, slowly work through the configuration steps, then verify everything was working properly. Once I had done that, I made detailed notes and wiped the configs. Then, I’d start over from scratch and do the whole thing again while writing up my blog posts. When I make videos, I often work through the configuration at least three times.

I’m a firm believer that you learn better by “doing” and this method required me to work through each configuration at least twice — sometimes more, if I screwed up along the way. Repetition is key.

After working through the configurations a few times, you’ll find that you easily remember the commands and necessary steps. At the same time, you’ll be coming up with great content for your blog that will help others out as well.

If you’re studying for the CCIE, you’re probably devoting most of your free time to those studies. In that case, you probably don’t think you have the spare time to keep up with a blog. On the contrary, however, writing my blog articles was one of the best ways I spent my time when studying. Try it for a month and see.

You’ll Research Better, More Often

When writing an article for your blog, you'll be inclined to spend extra time researching the topic. It's bad enough when somebody else makes you look bad, but nobody wants to make themselves look like an idiot.

While doing your research, you’ll often discover new options or features that you weren’t aware of before. If you’re like me, you’re probably on a never-ending quest to learn as much as you can, so make a note of those new things. You can come back to them later and write blog posts about them.

Connect With Others

When you start your networking blog, you’ll quickly find others who are “in the same boat” and writing related articles. Blogging is a great way to connect with others who share the same interests.

If you’re studying for the CCIE, for example, you’ll quickly discover many other folks who are writing about the exam, how they are preparing, and what they’re having trouble with. You can learn from each other.

When I started Evil Routers, my goal was to simply have a place I could put my “notes” in order to refer back to later. It quickly grew and I started meeting more and more people who were in the same situation as me (studying for certifications). I “met” (virtually) many others who are much smarter than me and was able to learn from them — and bug them when I had problems!

Last fall, I was lucky enough to be invited to HP Tech Day and Net Field Day and got to meet many of them in person. I’ve even (somehow) managed to get invited back to Net Field Day 2 (which is happening next week, by the way). I’ll probably be the “least smartest” in the room, but that’s fine with me — I’ll have plenty to learn and some awesome people to learn it from!

None of that would have happened if I hadn’t simply decided one day, “Hey, I think I’ll start a blog!”.

Blogging Keeps You Sharp

Like I mentioned earlier, I seem to be on a never-ending quest to learn more. I could sit in front of a computer reading technical documents and RFCs 24 hours a day if I didn’t have to stop to eat, sleep, and, well, you know, every once in a while.

If you’re in the IT field, I don’t have to tell you that technology is constantly changing and evolving. There’s no way you’ll ever know everything, of course, but if you want to stay sharp, you’ll need to constantly be learning new things.

Blogging is a great way to do this. As new technologies are developed and new products announced, you’ll want to learn about them. By doing your own research and writing about it, you’ll not only be helping yourself but others as well (see above). You’ll also establish yourself as someone knowledgeable about the topic.

Alright, You’ve Convinced Me. How Do I Get Started?

Have I convinced you yet that you should be blogging? Great!

Fortunately, the barriers to entry are quite low. In less than an hour, you can have your own blog up and running and have already written your first post. I’ll show you how.

The first thing you’ll need to do is find a host for your blog. Although you can set up a free WordPress blog, your web address will be something.wordpress.com. Because you’re going to be an awesome blogger, you’ll eventually want your own domain name (you’re developing your brand, remember?).

Or, you can do it the right way from the start. I recommend signing up with either Bluehost or Hostgator, both of which support “one-click” installs of WordPress to get you up and running in just a few minutes (if you don’t already have your own domain name, you can take care of that when signing up).

Neither is free, but they'll only set you back a few dollars a month, and they give you enormous flexibility when it comes to the design and layout of your blog. You can choose any of thousands of WordPress themes to customize it and use any plug-ins you like to extend its functionality.

Your First Posts

When you’re just starting out, your first blog post should describe exactly who you are and why you are starting a blog: to document your certification progress, to rant about how much Cisco sucks, or to simply get all the women (chicks dig bloggers, you know).

This sets the focus for your blog and gives you a starting point for your future blog content. In addition, it also gives your blog a bit of a personal touch.

After you get a few posts written, you'll probably find that hardly anyone is reading them. With millions of blogs on the Internet, how do you draw readers to yours?

One great way to get new readers to your blog is to offer to write a guest post on another blog. If you’re into networking, for example, you can submit a guest post to be published on the Packet Pushers website. This will help bring your writings to a new audience.

The key to “guest blogging” is to find blogs with a similar target audience, identify the primary interests of that audience, and make sure your blog fits in.

Spark Discussion

Now that you’ve gotten a few readers, how do you keep them coming back?

Asking questions and sparking discussion is one excellent way. Another is by raising controversial issues. You’ll get alternative opinions and viewpoints from others and this will also help people identify what you “stand for”.

In addition, be sure to make use of social media such as Twitter. This allows your followers to keep up with your new articles and also gives you a medium for discussion and networking with other key people in the IT field.

Get To It!

Blogging lets people know who you are, what you do, and what you stand for. It’s also an excellent way to meet new people in your field. It’s easy and cheap to get started and you can blog as often or as little as you like.

Next Monday, I’ll tell you about one method to help ensure you meet your certification goals and how your blog can help you do it.

For the price of a latte or two, you can sign up with Hostgator or Bluehost and have your blog up and running in under an hour. Once you do, post a link in the comments below so that we can find it.
