093 RR Security Exploits with Patrick McKenzie

by Charles Max Wood on February 20, 2013

Panel

Discussion

01:32 – Patrick McKenzie Introduction

02:03 – Security in Rails

09:12 – Why are there so many security issues right now?

  • White Hat and Black Hat Security Researchers

12:35 – Vulnerabilities and Exploits

  • Zero-Day Exploit
  • Patch Day

15:38 – Security Responses

28:00 – YAML

33:50 – Mindset of Hackers and Security Researchers

36:13 – Enabling features and disabling default features

  • Tweets from Peter Cooper 1, 2, 3
  • XML

50:46 – Safer coding practices

01:03:18 – Security Monitor by Code Climate

  • Discount code for Ruby Rogues listeners: RRSEC13
  • Includes early access to Security Monitor and 50% off your first three months.
  • Expires March 6th

Picks

Book Club

Patterns of Enterprise Application Architecture by Martin Fowler: Read along with us! We will be discussing the book with Martin himself and the episode will air on Wednesday, March 20th, 2013.

Next Week

Robust Ruby with Ara T. Howard

Transcript

JOSH:  You will be able to tell that it’s Avdi speaking because you’ll feel a warm glow start to work around your belly and expand out through your body.

[Laughter]

[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]

[This podcast is sponsored by New Relic. To track and optimize your application performance, go to RubyRogues.com/NewRelic.]

[This episode is brought to you by WAZA, Heroku’s one day celebration of art and technique. Join Matz, Aaron Patterson, and more on February 28th in San Francisco. Use exclusive code Ruby-Rogues-13 for $50 off registration at WAZA.Heroku.com.]

CHUCK:  Hey everybody, and welcome to Episode 93 of the Ruby Rogues podcast. This week on our panel, we have James Edward Gray.

JAMES:  Do you guys realize that Top Gun was redone in 3D?

CHUCK:  We also have Josh Susser.

JOSH:  How do I follow that? Hi, from San Francisco

CHUCK:  David Brady.

DAVID:  I never write insecure code, but my code is frequently jealous, overly dependent, constantly angry, and exhibits low self-confidence.

CHUCK:  Avdi Grimm.

AVDI:   James, you can be my wingman anytime.

[Laughter]

CHUCK:  I’m Charles Max Wood from DevChat.tv. And this week, we have a special guest and that is Patrick McKenzie.

PATRICK:  Hi to everybody, this is Patrick and I’m phoning in from Japan.

CHUCK:  Do you want to introduce yourself really quickly since you haven’t been on the show before?

PATRICK:  Oh, sure. My name is Patrick McKenzie. I’m perhaps better known as patio11 on the Internet, largely on Hacker News, because it’s my job. For the last six years or so, I’ve run a small software-as-a-service business, selling software over the Internet. My primary language/framework is Ruby on Rails. And the reason I’m on the show today is to talk a little bit about the recent Ruby on Rails vulnerabilities, because I have a bit of knowledge about them and I wrote a blog post about it.

CHUCK:  Awesome. Alright. So, there are security issues with Rails?

[Laughter]

JAMES:  Nope. Okay, we’re done. Bye everybody!

[Laughter]

PATRICK:  So, do you want the…? For anyone who’s been living under a rock for the last couple of weeks, do you want to just get the 15-second explanation out of the way?

JOSH:   Yes, why are we all feeling insecure right now?

CHUCK:  So, for the person who wasn’t up until midnight last night reading the blog post that you told us to read, what’s going on?

PATRICK:   Sure. Since late December, there’s been a number of very, very critical issues found in Ruby on Rails. These largely stem from one root cause, which is that de-serializing YAML, which Rails uses a lot internally, is insecure. You’re probably familiar with YAML from database.yml, where you have your production and test and development settings for your databases. But it turns out that Rails uses the same format to de-serialize other things, like JSON in some instances. And YAML is a very powerful language; it lets you de-serialize into arbitrary Ruby objects.

And in late December, it was discovered that de-serializing into arbitrary Ruby objects, depending on what version of Ruby you’re using, can cause arbitrary code to get executed. And people have been finding a variety of code paths that let them do that. The most serious one would have allowed anybody to write basically a 15-line Ruby script and remote compromise substantially every Ruby on Rails application on the Internet and run whatever code they wanted on that server, including like a system command to download the root kit and start playing with it. So, it was very bad.
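To make that root cause concrete, here is a minimal sketch (not the actual exploit) of what “de-serializing into arbitrary Ruby objects” means. The `Settings` class is purely illustrative, and note that Psych 4 (Ruby 3.1+) later made `YAML.load` safe by default, so the old permissive behavior now lives behind `unsafe_load`:

```ruby
require "yaml"

# Hypothetical class standing in for any class loaded in your app.
class Settings
  attr_accessor :host
end

# An attacker-controlled document can name any class via a type tag:
doc = "--- !ruby/object:Settings\nhost: evil.example.com\n"

# Psych 4 (Ruby 3.1+) made YAML.load safe by default; the permissive
# behavior Rails relied on in 2013 is now called unsafe_load.
loader = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
obj = YAML.public_send(loader, doc)

obj.class  # Settings -- the attacker, not your code, chose the class
obj.host   # "evil.example.com" -- and its instance variables
```

The exploits chained this object-instantiation primitive with specific classes whose behavior could be abused once instantiated.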

Those have been patched. If you haven’t patched already, you should have patched three times in the last month and literally hang up right now and go patch.

CHUCK:  Okay, bye! I’ll be back in an hour.

PATRICK:  And if you have patched, then you’re going to have to patch a few times in the coming weeks, because we’ve only seen the first couple of acts of this play so far.

DAVID:  I’ll just wait until they settle out.

JAMES:  Okay. So, there’s like a million things in what you just said that we should probably talk about a little bit. First of all, if you want to see how these exploits have basically been done, our good friend Tenderlove has a really good article that shows it very straightforwardly; it’s super easy to read, and anyone can follow it. So you can kind of understand what’s been going on here. And then, another thing is that Patrick has written a very detailed article about ‘what does this mean for your start-up’, basically, which is the reason we have him on our show, and you should definitely read that, because I see people complaining every day about things. And I’m like, “You haven’t read Patrick’s article yet, it’s obvious.” And then, just a third thing I thought about while you were talking: Patrick mentioned that these exploits can be used to gain control of a server running an un-patched Rails, and we should point out that the exploit has already been turned into a module for Metasploit, the Metasploit framework. So, that’s a penetration testing framework.

Now, it’s literally point-and-click to compromise a Rails server using these exploits, and you could script it and send out thousands of attacks very quickly. So basically, every un-patched Rails site on the Internet can be compromised with child’s-play tools at this point.

PATRICK:  I think that’s very important to emphasize, because a lot of people think, “My application isn’t particularly security critical,” or, “There isn’t a huge amount of money involved,” or, “I haven’t offended any hackers.” But there are literally people doing port scans of the entire Internet, hitting up port 80 for every IPv4 address and firing off four HTTP requests. And if your server was running Rails, it just got added to the botnet. If you were running WordPress or something, then it’s just another 404 error.

I would strongly suggest that you drop what you’re doing and patch immediately. Any application that you do not patch is going to get owned, whether that’s a customer-facing website or some internal tool, an instance of Redmine that happens to be accessible outside your firewall for some reason. If it is up there, it will get owned.

CHUCK:  So, when you say patch or update, what you’re saying is go in and update your version of Rails. So, you need to bundle update Rails?

PATRICK:  Right. You need to update to one of the fixed versions; they are listed in the security notices, and the most current fixed version is in the most current security notice that’s been published. Another option, if your application is not in a state to do an immediate upgrade: there are a few things that you can drop in initializers that will kind of hammer one of these holes closed at a time. But if you have an application that you depend on that is not in a state where it could easily upgrade, I would make it your number one priority for February to get to the point where you can reliably upgrade when a new version gets released. Because while everything has been amenable to kind of easy monkey patches so far, that certainly is not guaranteed for some stuff coming down the pipe.
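As a concrete sketch of what “update to one of the fixed versions” looks like in a Bundler-managed app: pin the Gemfile to the patched release line, then run `bundle update rails`. The version number below was the current fixed 3.2.x release at the time of this episode; always check the latest rails-security announcement for the current one.

```ruby
# Gemfile (sketch): a pessimistic constraint on the patched release line,
# so `bundle update rails` moves only within versions that contain the
# security fixes (3.2.12 was current as of this episode).
gem "rails", "~> 3.2.12"
```

After updating, redeploy every app, including internal tools, not just the customer-facing ones.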

CHUCK:  So, is there a link to those lists so that people can go and look and see where they’re at on the security spectrum?

PATRICK:  Yep, I think we’re probably going to put a link to the Rails security disclosure list in the show notes, and that is the sole source of truth about this.

CHUCK:  Yeah, I was just looking for that in the chat.

JOSH:  I have it right there. Unfortunately, it’s a Google group and the URL is for Google groups and not something that can be pronounced on the air.

JAMES:  Yeah, go for that.

JOSH:  It’s the Ruby on Rails-Security Google group and that can be searched for.

JAMES:  It’s a really good group too. Actually in preparation for this call, I went through it. There’s lots of resources out there including, I found at least one that you can pay for. I wouldn’t recommend it. This group is great. It’s basically all signal, no noise. They give you the security exploit, tell you what to do. That’s exactly what you need to know. So, that’s probably the best place to stay on top of them.

The Rails core team’s been really good about, as these are coming up, releasing super minor patches for all the major versions of Rails. So, even if you’re not using the most current thing, you can go in there and upgrade that minor point release or the teeny point release to the patched version, and all that’s in these patches is the fix. So, very low chance that you have some problem from the upgrade, basically.

CHUCK:  Awesome.

JAMES:  So Patrick, one of the questions that we got over Twitter to ask you is, “Why are there so many vulnerabilities right now?” And you have talked about this a bit in your article. But let’s talk about it on the air. Why is all this happening?

PATRICK:  Sure. Well, the fundamental insight is that security bugs in anything tend to be discovered in groups, largely because once you have one kind of vector for one underlying vulnerability (we just discovered that YAML parsing can be weaponized in particular circumstances), then it’s much easier to discover other similar code paths that exercise the same underlying vulnerability but are kind of maintained separately from one another.

Another reason is that the incentives for security researchers, both white hats and black hats, are kind of screwy. White hat security researchers, and I guess that’s a hat I wear something like 5% of the time myself, are incentivized, like academics, by a desire to find stuff to be able to publish. And so, the notion that a particular framework or a particular language has had vulnerabilities recently gets people to look at it, because it means they will be able to find new vulnerabilities in that framework and thus be able to publish and get the credit/kudos/commercial recognition for that success.

Similarly, black hat researchers are largely incentivized by being able to compromise applications. And the notion that, “Hey, we could compromise every Rails application on the Internet instantly, or effectively and instantly if we have a botnet,” is pretty powerful if you are the kind of person who runs botnets for a living.

So yeah, I’ve heard a lot of people saying that it’s just because Rails/Ruby has a cruddy security record or a cruddy community or anything, and I think that’s pretty far from the truth. You see this kind of pattern of events over and over again in pretty much all web stacks in all languages. You know, if J2EE, the big freaking Enterprise Java framework, has a code execution vulnerability, which happens every once in a while, you can expect that to be followed by other similar disclosures within a matter of a couple of weeks.

JAMES:  So, just to give a concrete example there: the original exploit, or one of the original exploits, was that in Rails, you can send your parameters in XML, but Rails also had a feature where it would allow you to embed YAML in the XML. And then, as Patrick mentioned before, YAML being so flexible, it allowed this exploit. And again, I refer you to the Tenderlove article if you want to see how to do that. So, that was patched.

And then, like Patrick said, what they do immediately after that is try to find another way to do the same thing. Well, it turns out that Rails also allowed you to send JSON and embed YAML inside the JSON. So then, bypassing the XML route, they could go in through the JSON and basically do the same thing. So, that’s why they’re looking at this angle.
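For reference, the stopgap initializer published alongside the CVE-2013-0156 advisory for apps that couldn’t upgrade immediately looked roughly like this on Rails 3.x; it closes the XML vector James describes by removing the dangerous type handlers, but it is a mitigation, not a substitute for upgrading:

```ruby
# config/initializers/remove_yaml_param_parsing.rb
# Stopgap from the CVE-2013-0156 advisory (Rails 3.x): strip the YAML
# and Symbol type handlers out of XML parameter parsing, so attacker
# XML can no longer smuggle YAML into params.
ActiveSupport::XmlMini::PARSING.delete("symbol")
ActiveSupport::XmlMini::PARSING.delete("yaml")
```

Each subsequent advisory shipped its own workaround, which is why Patrick stresses getting to a state where you can simply upgrade instead.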

DAVID:  It’s kind of like blood in the water, right? Once there’s some in the water, then the sharks start swarming at it.

JOSH:  Patrick, I have a couple of words that I would like you to define, since we’ve started using them. We’ve been talking about vulnerabilities and exploits. Can you talk about like what’s a vulnerability versus what’s an exploit?

PATRICK:  Sure. A vulnerability exists as a fact of nature: given that we are using code or a language that is written in a certain way, there is something, a mis-feature or a bug in that code, language, et cetera, that enables someone to do something that we don’t expect and would allow them to do bad things to us.

An exploit is actual working code which exercises that vulnerability. So for example, the Metasploit framework, or even just a privately maintained Ruby script that you can actually point at a server and own it, would be a working exploit. You can have vulnerabilities without exploits if nobody knows about the vulnerability, which was true of these six weeks ago. Or, you can have a vulnerability which you can look at and say, “This code is almost certainly vulnerable, but we haven’t successfully weaponized an exploit against it,” meaning we haven’t figured out the right combination of inputs to give the code to exercise that vulnerability.

JOSH:  And what about like a zero day exploit?

PATRICK:  I don’t love the term zero day but be that as it may, it’s one that’s used in the industry a lot. A zero day exploit means that you are hit with an exploit on patch day. Meaning, you had essentially no warning about it. Why don’t I like this? Because it suggests to people that patch day is the earliest warning you’re going to get which is not necessarily true or that you will always get warning in advance which is also not necessarily true.

JOSH:  What do you mean by patch day?

PATRICK:  The day Rails publicly releases, “We had X vulnerability discovered and we have produced a patch for it,” that’s patch day. And it would have been an extraordinarily bad idea to wait even one day on some of these vulnerabilities, because again, you could remote compromise servers over the Internet and compromise everything in the IPv4 space within that day.

So, it’s very important that you apply the patch on patch day. It was within the realm of possibility that someone could drop the zero day exploit, meaning that they would actually successfully backtrack from the patch or the vulnerability disclosure to working exploit code and then, actually use that for evil means on the same day as the patch dropped.

However, it’s also possible for people — these vulnerabilities have been found by white hat researchers. But it’s possible for the black hats to have the vulnerability working before patch day. So you could potentially get hit by a negative two day exploit and then well, sucks for you, right? So yeah, I don’t love the discourse of zero day vulnerabilities.

For example, a lot of people would describe what happened to RubyGems as a zero day vulnerability in that they were exploited on the same day they learned about that exploit being possible. But I don’t know if that necessarily advances the conversation. We should probably talk about what happened to RubyGems too because it’s slightly different than the Rails issue but is kind of related and is of massive importance to every Rubyist.

JAMES:  Yeah, let’s definitely talk about that. That was a big thing, and I think there was a tweet from Nick Quaranto where he basically said, “We’re putting RubyGems in maintenance mode until we can assess what’s happened here.” And I think that’s how it all started. So yeah, tell us what happened there.

PATRICK:  Okay. So recently, as a result of this research in the Ruby on Rails community, we’ve learned that de-serializing YAML is a very, very dangerous activity. And so, some folks who are probably not white hats started looking at all the other applications, other than the Ruby on Rails core framework, that would accept YAML from outside sources. One of those applications was the RubyGems website/framework, which is open source. So, they had a pretty easy time detecting, “Okay, if we give YAML in the gem description file, then RubyGems will actually parse that YAML to be able to do their operations on the backend, and that allows us to own the RubyGems server.”

So, they uploaded a specially coded gem called, appropriately enough, ‘Exploit’, which basically caused the RubyGems server to execute arbitrary code, taking a bunch of the RubyGems server credentials and posting them to a place where the attacker could then review them at their leisure. The fact of that exploit was disclosed to some people within the security community, and some of them immediately got on the horn with the RubyGems maintainers, which caused them to put the RubyGems service into maintenance mode, meaning they wouldn’t accept any new gems. And for a while, they weren’t actually distributing gems at all. And then, they had to verify that all the gems that were in their backend repository were not compromised.

The ultimate nightmare scenario would have been not somebody doing that kind of just-grab-RubyGems-credentials-and-laugh-at-them-a-little-bit-on-the-Internet thing, but someone successfully exploiting the RubyGems framework and using that to put backdoors or rootkits into commonly used gems. And then, anybody who typed bundle update or gem install anything for a period of weeks or months before it was discovered would have their machines rooted. That’s the ultimate nightmare-apocalypse-for-the-Ruby-community kind of disaster. I think we missed that by minutes. It was a very close thing. I hate to sound overly dramatic, but it’s kind of like Cuban missile crisis level, as far as software is concerned, right?

JAMES:  Nice. So, the team put RubyGems in maintenance mode and we’ve actually had discussions on this podcast before about what to do when you’re compromised. And the number one step always, as our understanding goes is, get it offline, like shut it down. Was the RubyGems reaction correct? Like, the first thing they did, they just kind of put it in maintenance mode. But I mean, is that okay not really knowing the level of their compromise? And the reason I ask is because Heroku, I don’t think, thought it was okay. They immediately shut off parts of the Ruby stack because they say they were not confident of the current state of RubyGems.

PATRICK:  I have a huge amount of respect for the RubyGems maintainers. They’re doing an awesome service for the community. I would hate to cast aspersions on anybody’s decisions, particularly when those decisions are made at a time of great stress and are easy to pick apart from 14 time zones away.

JAMES:   I totally agree.

DAVID:  Please end that sentence with ‘but…’

[Laughter]

JAMES:  I totally wasn’t meaning to criticize them as much as to educate what we should do when it happens to us.

AVDI:   Yeah, I’m not trying to kick these guys’ tires.

PATRICK:  Right. Of the two reactions, I would suggest patterning your own behavior off of Heroku’s. For example, if I received word that one of my services was getting compromised, then I would immediately take my entire network offline until I figured out what was happening. By the way, if any server that you run gets compromised with one of these things and the attacker gains control of that server, assume all the other servers are going to get compromised in very short order, even if they are patched and everything.

You’re already in total emergency mode after you’ve discovered that an exploit is happening, and it’s going to get worse for every minute that you wait. It gives the attacker time to get wormed even deeper into your services/machines/networks/et cetera. And it means that you’re continuing to expose your users to risk. Many of us run applications where people are interacting with them every minute of the day; every password, every credit card, et cetera, that’s getting traded with a compromised machine is going to be assumed to be compromised, right?

JOSH:  Yes, we did Episode 59 with Rein Henrichs, and we had a long discussion about security responses. So, I’ll refer our listeners to check that episode out. Because the focus of that episode is much more about ‘how do you respond to these things as someone who’s been exploited’ which is a great topic. But I think what we’re talking about here mainly is, what do we, as users of the Ruby ecosystem, have to deal with? Which includes that but it’s also like, how do we just take care of things so that security doesn’t become as much of an issue and we don’t have to have that happening.

JAMES:  Right. So, what you’re saying Patrick about the other machines being compromised, like if you have several servers that communicate with each other and somebody uses one of the exploits to get a root kit on one of them or whatever, then again, tools like Metasploit give them point and click ways to use that machine’s credentials to do something to another machine, right? So, that’s why…

PATRICK:  Right. There’s a variety of ways they can do it. They can do point-and-click things like Metasploit and just test every piece of software you’ve got running on an open port and see if you haven’t updated anything. They can use SSH credentials that you might have left on the machine to get “legitimate root shells” on the other machines. Basically, after they have access to one thing on your local network, unless you have done absolutely everything right, they’re going to get access to all the other ones too. So, I would bet against having done everything right. Even folks who have more budget invested in security than all of our businesses will see in total revenue in their entire operating histories lose everything when they lose one machine. So, assume you’re going to lose.

CHUCK:  I have to ask and I think I know the answer to this. But I guess every exploit’s different, so there really isn’t a good way of knowing whether or not you’ve been compromised?

PATRICK:  Yes. I hate that answer. But the first thing about being compromised is that, by definition, it means you can no longer trust what the machine is telling you.

JAMES:  Like the log files and stuff?

PATRICK:  The log files are in the hands of the enemy. Your monitoring code is in the hands of the enemy. Your database is in the hands of the enemy. The terminal that you are signing into to inspect the file system is in the hands of the enemy. There is basically nothing you can do to definitively say, “Okay, this machine is not compromised,” or to trust that a particular representation the machine is giving you is accurate. So, that’s why you take it offline and then start reimaging from source, which sucks, but there you go.

DAVID:  How can you trust that you really are reimaging from source?

[Laughter]

DAVID:  I hope you’ve got some good news for us because this podcast is making me want to not be a programmer anymore.

[Laughter]

JAMES:  I know, right?

PATRICK:  Right. Ideally, you’re using Git or something and you have your known good copy of your source tree, and you have a known good recipe for Chef/Puppet or whatever that allows you to build a new machine from bare metal. So, you fire those known good things against it and spin up another copy of the server. And then, another ideally here: ideally, you have a backup copy of the database somewhere that was not compromised. And ideally, your database does not have any executable code in it that will immediately cause it to re-compromise itself as soon as it’s spun up.

JOSH:  [Laughs] And hopefully, no one has compromised your backups.

PATRICK:  Yeah. And hopefully, no one has deleted your backups. So, every time there’s a security problem, you think, “There but for the grace of God go I,” and you should look at your own security setup and think, “Okay, if something like this happened to me, what could I do about it?” And I did that analysis for my own stuff. And I thought, “What happens if I lose the server?” Okay, I’ve got backups. And I thought, “If I lose the server, am I going to lose all the backups?” Yes. Well, that’s problematic and I should probably work on that.

JOSH:  I like the physically disconnected media form of backups. I worked at an organization where every day, they would do backups to a portable hard drive and then disconnect it and put it in a locked closet. Really hard for people to compromise it when it’s locked up that way.

JAMES:  Yeah, really good backup strategy we should mention is basically, you should never have only one source of backup, right? You should always have two, or three is probably better.

CHUCK:  The other thing is that in your backup sources, you want multiple versions. That way, if they manage to compromise one version, you can still roll back to a clean version.

AVDI:  Patrick, after you realized that your backup strategy was insufficient, what strategy did you settle on?

PATRICK:  So, it’s kind of a work in progress at the moment. I’ll give you a little outline of what I do for backups. Most of my services run on VPS’s. And so, I have backups of the source code, through the magic of Git, in a few places. But backups of the database or whatever are kept on the machine, and then the machine is itself backed up; being a VPS, there’s a snapshot image of it taken every couple of hours, which sits at the same hosting provider. And I thought, “Oh, if my account on the hosting provider gets compromised, I lose everything. That’s bad.” So, I’m going to be looking into doing some sort of backup to a redundant thing that doesn’t share authentication or anything. I’m kind of looking at Tarsnap, because folks who I trust have said good things about it.

AVDI:  Can you say that again?

PATRICK:  Tarsnap. It’s made by Colin Percival, who used to be the Security Officer for FreeBSD. So, a guy who seriously knows his stuff.

JAMES:  And I just want to say that their motto at the top of their page is ‘Online backups for the truly paranoid’.

[Laughter]

AVDI:  And you know, just because you’re paranoid doesn’t mean they’re not coming to get you.

JAMES:  [Laughs] That’s right.

AVDI:  Which is actually completely true in this context.

DAVID:  I actually, just because this is how I roll, recently went and looked up the definition of paranoia. And it’s an irrational fear or mistrust of others. What we are discussing is a perfectly rational fear or mistrust.

[Laughter]

PATRICK:  The Internet is out to get you.

DAVID:  That’s a great slogan, ‘The Internet is out to get you’.

JAMES:  It is. So, maybe taking the conversation in a different direction. Patrick, has this changed your opinion of YAML? Do you believe that YAML is kind of how a lot of this started? Do you believe that it’s a problem that we’re so YAML-centric in the Ruby world that we should look at other tools? Has it changed your opinion?

PATRICK:  I don’t think it’s a problem. I do think we probably need to stop using the stock Ruby standard library YAML de-serializer, which allows de-serializing arbitrary objects, and move to a paradigm where we only de-serialize a short whitelist of objects that we generally consider to be safe, like Hash, Array, String, Integer, etc., and then, at a particular call site for YAML.load, allow the user to whitelist other objects that they expect to come out of that particular call site. Or have a global application-wide whitelist available.
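The whitelist approach Patrick describes later landed in Psych itself as `safe_load`. A minimal sketch of how it behaves on a modern Ruby (the keyword arguments shown require Psych 3.1+):

```ruby
require "yaml"
require "date"

# safe_load builds only whitelisted classes; by default just the
# primitives (Hash, Array, String, Integer, Float, true/false/nil).
doc  = "released: 2013-02-20\ntitle: Security Exploits\n"
data = YAML.safe_load(doc, permitted_classes: [Date])
data["released"]  # a Date, because we explicitly permitted Date

# Anything outside the whitelist is rejected instead of instantiated:
begin
  YAML.safe_load("--- !ruby/object:OpenStruct {}\n")
rescue Psych::DisallowedClass => e
  e  # raised rather than building an arbitrary object
end
```

This is exactly the call-site whitelist model Patrick is advocating: the caller declares what it expects, and everything else fails loudly.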

AVDI:  I feel like YAML is just not, I don’t know. It seems like it’s probably not the greatest thing to be using for Internet data exchange. I mean, I love YAML. It’s fantastic for configuration files, and also really good for serializing objects locally, but not so great for passing messages around, maybe.

DAVID:  Ultimately, I think it’s just too powerful, right? I mean, YAML is wonderful because it makes your data look like Python which is great because Python, as a language, is very elegant and clean. I think the real problem is just the arbitrary serialization.

AVDI:  And there are a lot of arguments to be made, not just the security argument but there are a lot of arguments to be made for passing around constrained data types the way JSON does.

DAVID:  The thing that burns me up about this is that I spent a year back in 2009 trying to figure out how to cleanly serialize lambdas so that we could store them in the database. And we couldn’t find a clean way to do it. So, we ended up writing our own DSL and storing that. And now, these a-holes have gone off and figured out how to do it on anything. I might have to look at the exploit source code just to learn something.

[Laughter]

JOSH:  I don’t think they’re serializing lambdas.

JAMES:  Basically, just to explain it very simply, they’re able to construct the YAML document such that, when it’s parsed, if you have an object in your system that’s doing a string eval, they can manipulate what’s being eval’d by that string, which allows them to execute arbitrary Ruby code.

AVDI:  One of the things that I was struck by while I was reading through the exploit documentation was just how circuitous the attack routes are. Multi-step, using very specifically chosen obscure classes in Rails or in Ruby that happen to implement things a certain way, which can then be used to sort of move to the next level, the next step of the exploit. It hurt my head a little. I don’t know if there’s a question in there. It was just sort of humbling to realize that you really just can’t look at your code and go, “Oh, that’s not exploitable.”

DAVID:  Do you remember the buffer overruns from way back when? The limit on one buffer overrun was you could only overrun the buffer by 78 bytes. And somebody posted source code within a couple of weeks: 78 bytes of Intel assembly code that would go to the Internet, download a script, and execute it, which means you now have infinity bytes of buffer overrun.

JAMES:  Yes. We were actually having a discussion at the company I’m working for when these security exploits hit. They have an old legacy system that’s on MySQL and does some not super secure things. And it turns out that is a problem, not in the case of these exploits, but just in general, because MySQL prefers to cast everything instead of just failing. Because you can send JSON parameters to Rails, you can do things like send an integer for a field that’s actually a string. And if that integer is zero, when MySQL casts the stored string, as long as it doesn’t have a leading digit, it’s going to cast to zero. So, that’s going to match true for every record in the database. And then, using that, because you can manipulate the parameters depending on how the queries are constructed, it’s possible you could use it to return entries that aren’t actually legal.

It kind of speaks to what Avdi’s saying. It requires this flexibility from MySQL. It requires the way we pass in these parameters, and then the fact that we can do JSON means that we can actually get an integer in there where usually we would get a string with a web request. And it’s just all these things added together that can be used to do bad things.
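A pure-Ruby illustration of the casting rule James describes (this simulates the behavior; the real cast happens inside MySQL’s string-to-number comparison): when a string column is compared against an integer, the string is coerced to its leading numeric prefix, and a string with no leading digits coerces to 0.

```ruby
# Simulates MySQL's string-to-number cast used in comparisons like
#   WHERE token = 0
# A string is coerced to its leading numeric prefix; a string with no
# leading digits coerces to 0.
def mysql_numeric_cast(str)
  (str[/\A\s*[+-]?\d+(\.\d+)?/] || "0").to_f
end

mysql_numeric_cast("42abc")        # 42.0
mysql_numeric_cast("s3cr3t-token") # 0.0

# JSON parameters let an attacker submit a real integer where the app
# expected a string, so a body of {"token": 0} compared against stored
# tokens "matches" every token that doesn't start with a digit:
mysql_numeric_cast("s3cr3t-token") == 0  # true
```

This is why comparing typed parameters against string columns, and letting the database cast silently, is dangerous even without any of the YAML vulnerabilities in play.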

JOSH:  In some ways, it’s like playing a game and you’re on a quest and there’s all sorts of things that you have to achieve along your way to reach your ultimate goal. And you don’t always know what the next step is until you’ve achieved the one right in front of you. I can see how there’s a certain appeal to solving that kind of problem. It’s not that different from the kind of things that we get paid to do in our jobs every day.

JAMES:  Right.

PATRICK:  I think a lot of folks who are good solid web app developers would be good solid security researchers if they just got a little more perverse in their thinking and applied kind of the same skills to a different end.

JOSH:  So Patrick, do you need to have that kind — I assume there’s some sort of like mindset for hacker… By the way, do you prefer hacker or cracker?

[Laughter]

JOSH:  For somebody who’s like a black hat, who’s going after trying to compromise systems.

PATRICK:  I think society has spoken and they think hacker is that guy. So, call it hacker. Typically, folks call themselves security researchers when they want to sound like they are outstanding members of society.

[Laughter]

JOSH:  So, is the mindset…

[Crosstalk]

PATRICK:  Don’t look at me, I’m just a harmless academic. I totally can’t kill people by typing stuff into a computer.

[Laughter]

JOSH:  Okay. So to be a good guy or to be a security expert or security researcher, does that require you to have the same kind of mindset as the people trying to do the security exploits?

PATRICK:  So, a thing that I often find in people who are very good at this is that they are able to take a look at complex systems of rules and start seeing where those rules don’t quite cover each other. Like if you had somebody who played Dungeons and Dragons and he was an inveterate mix maxer, he figured out that there was one way that nobody ever anticipated to get like a sword of plus 347 of killing living things. And then, enchanted it with something kill on dead things as well. That kind of mentality works very, very well for kind of finding the weak points of code, the weak points of systems.

JOSH:  If you cast a fireball in a tunnel that’s ten feet high and ten feet wide, it fills up 33,000 cubic feet of space. So, you can hit enemies that are 33 grids away.

[Laughter]

JAMES:  That is awesome.

[Laughter]

DAVID:  Even though the range on the spell is only 20 feet.

PATRICK:  I am among my people. Okay.

[Laughter]

DAVID:  My favorite munchkin weapon from D&D was the +3 Sword that’s +6 versus creatures whose names start with J, B, or P.

[Laughter]

JAMES:  Nice.

CHUCK:  Awesome.

JAMES:  I want to take the conversation in a slightly different direction. I saw these cool Tweets by Peter Cooper when a lot of this was going down. And basically, what he said was this mainly got started by an exploit on a kind of bizarre feature of Rails. That it takes parameters in XML, okay maybe that one’s pretty understandable. But that it allows you to embed YAML in the XML without getting converted and stuff. I’m pretty sure not very many people knew that feature existed. And I’m assuming the number of people using that feature was a very, very small number.

So the question is, Rails kind of takes this attitude of — I don’t know so much that it’s part of convention over configuration, although I bet that’s some of it. But it kind of takes this attitude of, “Let’s just turn everything on, everything they might need, and have it all work just out of the box.” Is that really the best strategy? I mean, I don’t think anybody would argue that we want to have to go into some kind of config file and turn on Active Record or something like that. But there are a lot of features in Rails.

But is there maybe an argument that it might be okay to have us go into a config file and turn on YAML parsing inside of XML? That there would be fewer open vectors for security attacks, and that not a lot of people end up needing that feature, I’m assuming, and I hope it’s a safe assumption. And then, another point Peter made was, we’re assuming not very many people knew that this feature even existed. If I had to go into a file and look at all the choices, all the switches I could flip on, then at least I would know it existed, which is something I probably didn’t know before. Any thoughts on that? Should we turn everything on by default?

PATRICK:  I think there’s a danger there in that with 20/20 hindsight, if you know that the vulnerability is coming down the pike in XML parsing with embedded YAML documents, then yes. Obviously, you don’t want to be doing that. But that was not obvious when that line of code was written in, I think, 2008. It was probably written to support one Rails app that needed that but that seemed like a great idea at the time. There must be at least a thousand decisions that are equally arbitrary in Rails code base. And if you were to put a thousand ‘if checks’ to check configuration files, I don’t necessarily think that that would put Ruby security in a better place.

First of all, for a lot of these things, it’s not obvious from like looking at code paths that a particular code path is accessible at all. That’s kind of how these are getting discovered. And second, if you went through the massive change of like retrofitting Ruby — sorry, Rails 2.3.X or anything but edge Rails with a thousand extra security features, you would probably introduce about as much surface area for attacks as you’re taking away.

AVDI:  I remember when I first saw the thing about the YAML inside of XML, I thought that is insane. Why would that ever be a thing?

CHUCK:  [Laughs] That’s what I thought too.

AVDI:  But I did my homework last night. And if you look at the incoming parameter parsing, like API parameter parsing, from the perspective of making a system where you can serialize an object and then de-serialize the same object, you can send an object over to some client and then they modify something and then send it back. Well, a lot of Rails code bases do have attributes that are serialized YAML. That’s one of the basic Active Record options: you can have an attribute which is serialized as YAML, and that’s what that feature was for. It was for those attributes where you serialize and you say, “I want this database column to be a YAML column.” That’s not that uncommon, I don’t think.
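A minimal plain-Ruby sketch of both halves of Avdi’s point: the legitimate serialized-attribute round trip, and why loading attacker-controlled YAML is dangerous. The `Widget` class and payload here are hypothetical, and the loader check accounts for newer Psych versions where `YAML.load` is safe by default:

```ruby
require "yaml"

# What Active Record's `serialize :preferences` does under the hood: the
# attribute is stored as a YAML string and loaded back on read.
prefs = { "theme" => "dark", "emails" => false }
column_value = YAML.dump(prefs)          # what gets written to the database
round_trip   = YAML.load(column_value)   # what the model gives back

# The danger: full YAML loading can instantiate arbitrary Ruby objects.
# (Modern Psych makes YAML.load safe and moves this to YAML.unsafe_load.)
class Widget; attr_reader :name; end     # hypothetical class, for illustration

payload = "--- !ruby/object:Widget\nname: pwned\n"
loader  = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
obj     = YAML.send(loader, payload)

puts obj.class  # an attacker-chosen class, instantiated without calling `new`
```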

PATRICK:  I’ll also cop to having an actual production system in my resume where we put the JSON grab bag in the middle of an XML file just because, you know, you’ve been around a big organization before. Somebody in Department X mandates that all systems have to use XML and we needed to ship a particular system in a quarter. And the only way that was going to happen was to do it in JSON that could actually be delivered without having to touch a big freaking Java Enterprise piece of middleware on both sides of two independently managed code bases. So, these things happen in the real world. And I think, ultimately as a kind of pragmatic choice for programming, Rails does have to meet the needs of apps like that. Question mark.

[Laughter]

JAMES:  That’s an interesting question. I wasn’t necessarily saying we should take these features away. I was more just speaking to the statistical likelihood. I mean, right now, it’s a fair question to say, how many people’s Rails apps are accepting XML? I don’t think it’s popular in our current environment. I think we are pretty much settled on JSON as a good inter-process format. I’m not saying it doesn’t happen. I’m sure it does at lots of places like you said Patrick, legacy systems and stuff like that. And some people just have reason to prefer XML for markup-y kind of stuff. But I don’t know that it’s as super common as it once was. And I was wondering, just statistically speaking, if we close some of those vectors, whether there’s value in that. But I definitely hadn’t considered what you said about the systems to do that introducing additional attack points as well.

PATRICK:  Speaking of which, just so we’re not running on YAML the whole day. XML, by the way, is an incredibly complicated format. And even in the total absence of YAML, I think that if anyone tells you they actually understand everything that an XML parser does, they are lying, because it’s beyond the comprehension of human minds. There are ways to basically define tags local to an XML document within the XML document, or to override the meaning of particular characters. It gets absolutely crazy. So, if you want to look in your magic eight ball for things that will be discovered in the next year, there are probably going to be pure XML attacks against common XML parsers, which might counsel turning XML parsing off unless somebody needs it. But, there you go.

JOSH:  James, you were talking about enabling features that you might or might not be using. One of the things that I always see in Rails apps is things like the .XML or the .JS extensions for formats on routes, which just come along. When you do map.resources in your routes file, you just get XML and JSON and whatever else that’s part of the Rails standard these days. And I always look at that and say, “I don’t want formats on my routes!” And to be able to get rid of that stuff, you actually have to look around to figure out how to do it.
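For what it’s worth, a sketch of how you can opt out of those route formats in the Rails 3+ routing DSL, assuming the `format: false` routing option (a config fragment, not runnable outside a Rails app):

```ruby
# Hedged sketch: `format: false` tells the router not to accept the
# `.json` / `.xml` style extensions Josh mentions, so `/posts/1.xml`
# stops matching and only `/posts/1` is routed.
Rails.application.routes.draw do
  resources :posts, format: false
end
```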

JAMES:  And it exposes some data, right? I mean, usually not too painful but sometimes it adds some extra time stamp information that maybe you do or don’t want the outside world to have.

JOSH:  Yeah. So, I think that there’s a lot in Rails that’s set up to just be low friction to be able to use everything. And I guess we’re learning that maybe that’s not the best thing.

JAMES:  I’m definitely biased about this discussion, but the CSV library that I wrote in Ruby got dragged into this conversation as it was happening. It turns out that CSV also would allow you to serialize Ruby objects. So, I had a method called CSV dump and you could pass Ruby objects to it, actually an array, and it would flatten them out into a CSV file. Then there was a CSV load feature that would reconstruct those objects in Ruby. And when doing that, if you constructed your CSV file correctly, you could arrange to have certain methods called and pass them what you want. So, you could call system and pass it [inaudible]. It was definitely an exploit.

So, a lot of people jumped on that and said that CSV reading is compromised, which is totally misguided and overblown. This had nothing to do with CSV’s normal reading and writing system, which was fine. It was a side feature that I had experimented with and put in. And then people were arguing that people are probably accidentally using that feature because, you know, the name load isn’t very clear. Again, I think that was pretty much misunderstanding the problem. It required a specially formatted CSV file as produced by dump. So, if you had called load on your normal business spreadsheet, it would have just failed with an error or something because you weren’t passing it the kind of data it was expecting. So, I’m pretty sure nobody was using it by accident.
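To underline James’s distinction, ordinary CSV parsing only ever produces inert strings. A minimal sketch with Ruby’s standard csv library (the data is made up):

```ruby
require "csv"

# James's point: ordinary CSV parsing was never the problem. Parsing yields
# plain rows of strings, inert data. Nothing is instantiated or called.
rows = CSV.parse("name,qty\nwidget,2\n", headers: true)

puts rows.first["name"]  # just the string "widget"
```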

We did things like code searches on GitHub and found zero uses of it and stuff. I think it was just a toy feature I stuck in at one point and nobody ever used because why would you? So, there was a big discussion and it ended in me just removing the feature. I think it had very little value and I probably shouldn’t have put it in there in the first place. And all of that, I think, is a good conversation to have.

But in the process, people — I was asking questions like, “Are we changing our mind on Ruby should trust you with the sharp scissors,” because that’s always been one of the things about Ruby, right? It does have all these super powerful tools and we trust you, as the programmer, to handle those tools correctly. I was worried that when people were arguing against CSV serialization, that they were saying, “No, we can’t trust people with powerful tools like that,” which I don’t agree with. I do agree that the feature should be removed because nobody was using it. It didn’t have a point. You could use Marshal or YAML or whatever. But are we changing our mind about Ruby trusting us with the sharp scissors?

JOSH:  I think that’s a great question. Another way to maybe look at that same question is, if you look at a language like Java, security was one of the things that they cared about from the very start. And they’ve put a lot of design effort into the language to make it secure from the ground up. I don’t think that Ruby has an attitude that, “Hey, we don’t give a toss about security.” But it’s not as big a focus for Ruby. The safe level is, I think, the only “security feature” that I’m aware of in the language and…

DAVID:  And nobody knows how to use safe levels.

[Laughter]

JOSH:  Yeah.

JAMES:  Yeah. I’ve got to say that it’s hard for me to have confidence in safe levels because Brian, I think, has pointed this out a lot in the past just that turning on a certain safe level has massive changes all throughout the Ruby VM. Are we sure that’s all okay? There’s no problems with any of that? We’re sure? I don’t know.

[Laughter]

JOSH:  Okay. But the attitude of Ruby and the positioning of the Ruby language, with everything that goes into it versus security, is that it’s an incredibly dynamic language. You can do almost anything to your program or someone else’s program in memory that you want. You can reopen classes, you can call private methods. You can directly access instance variables. So, it’s very much a do-whatever-you-want-to-anything kind of language. It’s not like Java where you can freeze things and close classes and prevent people from inheriting from your class, and that kind of stuff.

CHUCK:  So, I have to ask then, are you saying that the horse is already out of the barn so there’s nothing we can do about it? Or are you saying that right or wrong, this is where we’re at? Are you agreeing or disagreeing with James?

JOSH:  I think what I’m saying is that the fundamental nature of Ruby prevents it from being as secure as something like Java at the language level. There’s just too much dynamism and ability to change things however you want, so that if you want to have something that is secure in your application, you have to have some sort of layer on top of that that imposes a stricter security outlook.

PATRICK:  I think that’s bang on. One of the things that I’m noticing working with a .NET team is that they’re used to having to define an interface to everything. And with Ruby, everything is duck typing. And I think it’s a feature of Ruby and you certainly can lock anything down that you want. But if you want to lock something down, you have to explicitly lock it down. Where languages like Java or C#, no. If you want to unlock something, you have to explicitly unlock it.

AVDI:  I have kind of a related question. Looking through these exploits, I got to wondering, is it worthwhile to think about safer coding practices? Is it even worthwhile to have some rules or things that we try to avoid and then tell other people to avoid? Like everybody knows evaling unsafe data is a no-no. But these exploits were carried out without any explicit evals in the application code or the Rails code. And one of the ways that that was done was using the ability to substitute arbitrary values for instance variables in objects loaded by YAML. You could go find a class in Rails or in Ruby, an existing class which did a send, like an object.send, where you’d send an arbitrary method. And there would be one instance variable which was the receiver and another instance variable which was the name of the method to send. Then they were able to say, “Okay, the name of the method to send is going to be eval.” And there you go. You’ve got your eval.

But that never would have happened if the programmer had used public send there, because public send actually respects public and private boundaries in Ruby. And eval is actually a private method. It’s on every object, but it’s a private method, so you call it with no dot in front of it. And so, I could easily say, and I’m a big advocate of using public send everywhere anyway, that you should always default to public send, not straight send, unless you have a really good reason to use straight send. But is that just totally off base, false-sense-of-security stuff, where there’s no point talking about these rules? Or is it worthwhile having a few rules where you’re probably going to be better off, where you’re going to avoid maybe 80% of cases if you avoid certain constructs?
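Avdi’s send versus public send distinction is easy to demonstrate. A minimal sketch:

```ruby
# Kernel#eval is a *private* method available on every object, so plain
# `send` will happily invoke it, while `public_send` refuses.
obj = Object.new

puts obj.send(:eval, "1 + 1")    # => 2 -- send ignores method visibility

begin
  obj.public_send(:eval, "1 + 1")
rescue NoMethodError => e
  puts e.message                  # public_send respects the private boundary
end
```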

JAMES:  No. I think you’re exactly on. And to be clear, the exploit you just described where you could send any method to any object is exactly what was in CSV. So, one of the things I could have done is just switched it to public send like you said, and I did even consider that. So as far as the send thing, I think the reason that happens is that public send is relatively new. It came in Ruby 1.9, I think?

AVDI:  Right.

JAMES:  And so, I think we just got into the habit of using send for that in the 1.8 and below era and then, we haven’t caught it up and realized, “Oh yeah, I should do…” It’s like you said, I should default to public send and then only use send when it matters.

AVDI:  I still think they should have stuck with making send into public send and just dealing with all the broken applications and then adding a private send. But, water under the bridge.

JAMES:  That was discussed when the change was made in 1.9. Some of us argued for changing the behavior of send so that it would only send to public methods. And the argument from the other side was that it would break a lot of code, and that’s true because everybody was using it not assuming that. So instead, public send was added as the safer version.

JOSH:  It would probably only break mostly tests, right?

[Laughter]

JAMES:  I think they actually did it for a while and it broke like half the standard libraries.

AVDI:  It’s so common to find constructs like a class where you can pass in a hash of options which become instance variable setters, and it does it by sending self the key name plus an equals sign, and then the value.
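A minimal sketch of the construct Avdi describes, using a hypothetical `Config` class, and using public send so private setters stay off limits:

```ruby
# The options-hash-to-setters construct: each key becomes a setter call.
# Using public_send (rather than send) keeps private setters unreachable.
class Config
  attr_accessor :host, :port

  def initialize(options = {})
    options.each { |key, value| public_send("#{key}=", value) }
  end
end

c = Config.new(host: "example.com", port: 443)
puts c.host  # "example.com"
```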

JAMES:  Like active record does with its constructor.

CHUCK:  Can I get in on this for a minute because I’m wondering if these exploits are any better or worse in 1.8 versus 1.9 or is it strictly Rails?

PATRICK:  So, the process of actually achieving an exploit is more difficult on 1.8 than it is on 1.9. I have been told by numerous credible people that there are private exploits that work on 1.8. Whereas the Metasploit framework tests and the exploits that have been detailed publicly all work on Ruby 1.9 exclusively, because the Psych parser is the standard default for 1.9. And it just turns out, basically on the chance of how a particular dozen lines of code were written, that 1.9 is much more easily exploitable than 1.8 is. But no, you’re definitely not safe if you’re on 1.8, so don’t just consider that to be your panacea.

CHUCK:  Interesting.

JAMES:  Going back to what Avdi was saying though, there’s string eval, and I think we all know that that’s a judgment call and dangerous. There are reasons though. Like in a lot of the exploits that were used, they could have used define method instead of the string eval they were doing. But in Rails, a lot of times, you string eval for performance. Define method introduces a performance penalty because the block is a closure and so requires some additional dereferencing. Whereas once the string eval has happened, it’s basically a normal method, like you had just written that method, right? And so because of that, there’s no long-term penalty for it. So sometimes, they prefer that construct in something like Rails, which needs to run as fast as possible through that code. There are reasons to use those tools, basically.
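A sketch of the two styles James contrasts. Both define equivalent methods; the string-eval version compiles to an ordinary method with no closure, which is the performance rationale he describes (the `Doubler` class is made up for illustration):

```ruby
class Doubler
  # String eval: compiled like a hand-written method, no closure attached.
  class_eval <<-RUBY
    def double_eval(n)
      n * 2
    end
  RUBY

  # define_method: the block is a closure, carrying a small dereference cost.
  define_method(:double_block) { |n| n * 2 }
end

d = Doubler.new
puts d.double_eval(21)   # 42
puts d.double_block(21)  # 42
```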

PATRICK:  I think there’s a bit of a danger in that. It’s definitely worth noting to avoid dangerous constructs in your code. But for a lot of these things, it’s just a unique pattern of circumstances that actually makes them dangerous. Like if you were to say, “Try to avoid the meta-programming features of Ruby for classes which touch user input,” then NamedRouteCollection would not obviously be a class that touches user input, right? Because you think when you’re writing that class that it’s only ever going to get called by the framework as it’s parsing the routes.rb file. And that’s all been okayed by the programmer. So, everything’s totally good and nobody’s ever going to do something stupid like try to create new routes at runtime.

But then surprise! Surprise! If some other code that was written by somebody on a different continent, years after NamedRouteCollection, will allow you to instantiate NamedRouteCollection arbitrarily, then bad stuff happens. So yeah, it’s tough.

In general, I think there’s a lot of value in security and rules of good coding practices and whatnot. And that we should try to discover that as a community. But a lot of the off the cuff, “Well, if they just did it like this,” doesn’t work out as well as people think it does.

CHUCK:  So, are there any other aspects of this that we want to talk about before I ask the other question that I have. And I think Avdi kind of asked it. But I want to make sure that we’ve covered the exploits that are out there before I ask my question.

DAVID:  Exploits that are out there?

CHUCK:  Yeah, the ones that have been disclosed.

[Crosstalk]

JOSH:  Why don’t you ask your question, we can ask more afterwards, right?

CHUCK:  That’s true. Yeah, okay. So my question is, what about my code? These are things that kind of globally affect people and they’re — that’s why they’re so critical is because they affect hundreds or thousands or however many websites on the Internet. But my website, how do I avoid putting vulnerabilities into my code? How do I avoid writing code that is vulnerable to attack?

PATRICK:  So, there’s a variety of good resources that can teach you the kind of common coding patterns that cause problems. Not so much coding patterns, but usually architectural problems in code. For example, SQL injection vulnerabilities, even though Ruby on Rails will protect you from them to a large extent, are one of the huge things discovered in virtually every pen test. A lot of people just kind of use the same code that they’ve used on the last six projects for things like doing user session management or password resets or that sort of thing. And those are very easy to screw up in lots of wonderful, wonderful ways, like allowing people to do password resets for admins. Having your admins managed in the same application, with the same controls as the rest of the users, is also kind of a pattern that you want to avoid.
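The SQL injection pattern Patrick mentions can be shown purely as string construction, no database required. The query and table names are hypothetical:

```ruby
# The classic mistake: interpolating user input straight into SQL builds
# attacker-controlled queries.
input = "' OR '1'='1"

unsafe = "SELECT * FROM users WHERE name = '#{input}'"
puts unsafe  # SELECT * FROM users WHERE name = '' OR '1'='1'  -- matches everyone

# The Active-Record-style fix is to pass the value separately and let the
# driver quote it, e.g. (sketch): User.where("name = ?", input)
safe_template = ["SELECT * FROM users WHERE name = ?", input]
```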

And that stuff and more, you can find out on the OWASP list of common vulnerabilities and also in books. I’ve got a recommendation for a book on web security which I thought we were going to save for the picks section. But since you’re asking for it — okay, where are my notes? It’s called ‘The Tangled Web’ and it’s by a gentleman whose name I might mispronounce, Zalewski. It’s a very good primer that covers everything from a very high level, not a lot of detail, down to, okay, there are ways to turn particular SQL queries into injections, and it walks you through doing it. It’s not Ruby or Rails specific; these things are endemic to all web applications. So, even if you aren’t very interested in security, I guarantee that you will have doors opened for you by that book.

CHUCK:  What was that list of vulnerabilities that you were talking about?

PATRICK:  The OWASP Top Ten List. They publish and do one every year.

CHUCK:   Alright, cool. Are there any other resources that you can recommend to us before we get into the picks?

PATRICK:  Not off the top of my head. I’m not really a security researcher; I just play one on the Internet. My security researcher buddy did give me a list of three books to read. And I promptly did not read them and instead went back to coding applications, but ‘The Tangled Web’ was really awesome.

CHUCK:  Okay.

DAVID:  That’s your advice kids. Security is a thing, go write your apps.

[Laughter]

[Crosstalk]

PATRICK:  It’s a trade-off, right? The most secure application is one that’s running on a computer which is powered down and disconnected from the Internet. And that’s not something that suits business ends at most of our companies. We have to tolerate some level of risk, while at the same time, it’s equally irresponsible professionally, and not good for our businesses, if people are getting compromised left and right from using our applications.

JAMES:  I’m totally making that argument to my next client.

CHUCK:  What? Shut down the server?

JAMES:  Yeah, we’re not putting this thing on the Internet. Do you know how dangerous it is out there?

[Laughter]

PATRICK:  Honestly, you could have made a credible case when one of the patches dropped, saying, “We should take the website down right now and just figure out where we are, give it an hour or two,” and have the hour of the outage be less dangerous than the prospect of the app being remotely compromised in that hour. I think that would have been, at many businesses, a very responsible thing to do.

JOSH:  I know that Rein talks about — when we did that episode with him, he talked about how if your system is compromised, just shut it down because you have no idea how bad the damage is. And every minute that it keeps running, you could just be making things worse. And if your box got rooted, then there’s just really no point in trying to repair it. The best you can do is burn it down and create a new one.

JAMES:  Josh, you have an exciting new security resource for us, don’t you?

JOSH:  I do. This is a great moment for me to mention it too and especially Chuck was asking, “What can we do about our own code, our own application code? How do we make sure that we’re doing a good job with security there?”

Well, a friend of the show, Bryan Helmkamp, the creator of Code Climate, is right on top of things. I was really impressed. Talk about good timing. Obviously, the security stuff has been going on for a month or two so he didn’t whip this out overnight. But Code Climate has this new feature called Security Monitor which looks really good. Just like Code Climate scans your app and tells you bad programming practices in your code, they have this new security monitor feature which scans your code and looks for bad security practices. I hooked this up to the application that I’ve been working on lately and found a half dozen security issues with my code like mass assignment problems and redirecting to a URL that was provided to me by a user, having the default routes running, and things like that.

So, it’s really cool. It looks like there are a couple dozen types of security vulnerabilities that it scans for. One, two, three, four, five, six, seven…yeah, like about 20 or 25. So, it’s really cool.

And Bryan, he’s a fan of the show. So, he’s offered a discount code for our listeners: sign up for any Code Climate plan that includes Security Monitor, and you get half off your first three months and early access to Security Monitor. Security Monitor looks really good. This is just the first cut of the feature. So, I’m sure it will be getting better too. I’ve already given him some feedback that he’s going to incorporate and I’m sure that other people are too.

So, the code for that is RRSEC13. And I’m not sure how long that code will be good for, but I’m sure if you use it in the next couple weeks, it will probably be fine.

CHUCK:  Awesome. Alright, let’s get to the picks. James, what are your picks?

JAMES:  I’ve got two. I read the Pragmatic Magazine on and off sometimes, PragProg it’s called. And there was a good article a while back from Andy Hunt about estimation that I really enjoyed. And it was basically about how the worst thing about estimation is that we begin with an infinite set of choices, which is always really hard. So, he advocated basically only allowing yourself four choices. One hour, one day, three days, one week; and you can only work in those four choices. And then the constraint, as in most things in programming, obviously ends up making things better. So that was a cool article when I read it and I really enjoyed it.

But in this most recent edition of the PragProg, there’s an article by Ron Jeffries called ‘Estimation is Evil.’ And that’s not actually a link baiting title. It is pretty much exactly the argument he makes is that estimation is evil. It does a lot of harm and it’s probably not a good practice for us to engage in at all. That was really eye-opening for me and I really enjoyed it.

So, if you’re one of those people that struggle with estimation like I do, you should definitely go read these two articles to give you some new ideas and stuff about how maybe not even to do it because it’s probably not a good idea. That’s it. Those are my picks.

CHUCK:  Awesome. Josh, what are your picks?

JOSH:  Okay. By the way, I just got clarification from Bryan saying the discount code is going to be good for two weeks from the publication date of this podcast which I guess is on the 20th. So, it will be through the end of February or early March something like that. I can’t do math right now.

So, picks, okay. I actually have a code pick. Yay! We’ve picked Stripe before. I think I might have even mentioned Stripe Checkout before. But I’ve now been using it in anger and I really like it. So, if you need to do payment processing and you want a snazzy UI to capture the customer’s credit card data and submit it to Stripe and get your token back for you, you can do it without doing your own form stuff anymore. You just drop a couple lines of JavaScript into your view and you’re done. It’s really, really easy. There are even some ways that you can customize the display. So that’s pretty cool. I like that a lot.

Then I have a couple fun ones. I think a while ago, I picked the Fashion It So, the Star Trek Next Generation Fashion Blog.

[Crosstalk]

JAMES:  I think that was awesome.

CHUCK:  That was funny.

JOSH:  So from that, I discovered a new Tumblr, which is Trek and the City. Somebody figured out that Kim Cattrall played Samantha in Sex and the City and a Vulcan in one of the Star Trek movies. I haven’t seen those movies in so long. I forget which one it was. But Kim Cattrall was in both shows. So, they made a tumblog about the intersection of those two shows. And basically, if you’ve ever watched Sex and the City, Carrie, the Sarah Jessica Parker character, does all of these narration bits where she talks about stuff. So, this is basically like, imagine Carrie is reading one of her intro bits. And if you can imagine her saying it, it’s like, “Oh, okay. This totally works.” So, it’s just basically imagining the confluence of those two worlds.

CHUCK:  According to IMDB, it is Star Trek VI: The Undiscovered Country.

JAMES:  It’s the Klingon one, right?

JOSH:  Yeah.

CHUCK:  Yes.

JAMES:  Spock comes up and mind-melds with her. It’s great.

JOSH:  Yeah. I knew it was Undiscovered Country. I just couldn’t remember the number. Okay. So, there’s that one.

And then, I found something last night which was just kind of amusing, which is another Tumblr. It’s femalesoftwareeng.tumblr.com. And it’s just a lot of ironic commentary. And I’ll leave it at that. Okay. So, those are my picks.

DAVID:  It’s freaking hilarious.

CHUCK:  It was funny.

JOSH:  [Laughs] Yes. Okay. That’s it for me.

CHUCK:  Alright. Avdi, what are your picks?

AVDI:  Alright. I’ll start with a development pick. There’s an article by Steven Jackson about pair programming. And if you’re a long time listener, you know I’m pretty big into pair programming. I do lots of remote pair programming with lots and lots of different people. I really love this article because it basically is a very honest and detailed look at somebody’s experience going from not doing any pair programming to doing a fair amount of it. And really hits the nail on the head for like, if you’re still in the camp of this pair programming stuff sounds crazy. I hear a lot of people that are just like, “I don’t know. I’m not sure about pair programming stuff. I’m cool with the other Agile practices but this pair programming stuff, I don’t think that’s for me.” Read this article if you’re in that camp. I think it really covers why it’s worth trying.

Alright, non-development stuff. I have been watching House of Cards on Netflix and it is some of the best television I have ever seen. I haven’t finished it yet but freaking great show, really well done. The directing is great, all the actors are great. Of course, Kevin Spacey, wonderful. If you get Netflix, totally check that out.

Let’s see. What else? There’s this food blog. It’s kind of a food and photography blog or photography of food blog called White on Rice Couple. It’s WhiteOnRiceCouple.com. And this isn’t actually something that I read but I was enjoying some amazingly delicious food last night prepared by my wife from a recipe picked by our teenage daughter. And I was like, “Man! That was odd or surprising but really good.” It was like Brussels sprouts with Sriracha and mint, and amazingly good. And I was talking to my teenage daughter about it, and she said, “Yeah, I picked that out.” And I started talking about some of the other dishes that I really enjoyed that she’s either made or picked out the recipes for. I have the great privilege of living with a couple of stellar cooks. And she said, “Yeah, those are all from the same blog, this White on Rice Couple blog.” And it’s these two people from very different backgrounds. One is from, I think, an Asian background, something like a Thai background, and the other is from a Cajun background. And I haven’t really gone through the site much. Like I said, I’ve mostly just enjoyed the recipes that have apparently come from it. But if you’re into cooking, these are really good recipes.

CHUCK:  Alright. David, what are your picks?

DAVID:  Last week’s episode kind of cleaned me out for picks but just really quickly ‘How to Tell if Your Cat is Plotting to Kill You’ by The Oatmeal. It’s his most recent book. TheOatmeal.com is a fantastic comic. It’s a little bit on the edge of not safe for work as far as language, but it’s just a hilarious comic and he’s put all of his cat comics together into a single book. You can get it in paperback or in Kindle. But honestly, the paperback is like $0.78 more and the Pullout Poster on the Kindle really just doesn’t work well. So, that’s my pick. It’s the funniest book I’ve read this year so far.

CHUCK:  Awesome. So, I only have one pick. It’s been kind of a crazy week and I haven’t had a lot of time to try new stuff. The pick is the Disney ‘Where’s Perry’ app for the iPhone. It’s one of those, you have to puzzle your way through how to solve the thing and free the platypus. It’s a fun game. My wife was playing it and so, I’ve been playing it. It’s based on the Disney show Phineas and Ferb which is also a funny show. So, I guess I’ll pick the show too. But yeah, a lot of fun.

Patrick, what are your picks?

PATRICK:  I wasn’t really conversant with the picks thing prior to doing this. So, I picked ones that are just on security. We talked about The Tangled Web earlier and there’s also the open source project that does kind of code linting for you to find obvious points of problems, and it’s called Brakeman. And it’s at BrakemanScanner.org. I haven’t used it myself but it comes highly recommended.
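As a sketch of how you’d try it, Brakeman installs as a standalone gem and statically scans a Rails application’s source tree without booting the app (the path below is an example):

```shell
# Brakeman statically analyzes a Rails app's source; it does not run the app
gem install brakeman
cd /path/to/rails/app
brakeman                 # print security warnings to the terminal
brakeman -o report.html  # or write an HTML report instead
```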

CHUCK:  Alright. Well, we’ll wrap up the show. Go to Code Climate and get the security feature, the Security Monitor. Also, you can go to RailsRampUp.com and sign up for my Ruby on Rails course. I know Avdi is still doing Ruby Tapas. So, go check that out because it’s awesome.

JOSH:  And everybody, get on the Ruby on Rails Security mailing list.

CHUCK:  Yes, absolutely. And finally, you can sign up for Ruby Rogues Parley by going to Parley.RubyRogues.com.

JAMES:  Great balls of fire!

CHUCK:  Alright. We’ll wrap this up. Catch you all next week.

1 comment

benmmurphy

I have to disagree with Avdi about public_send. If you read tenderlove’s blog post, changing send to public_send everywhere is not going to fix the problem. There is so much dynamism in Ruby that if you can construct arbitrary object graphs, then you can create a chain that will end up evaluating code. The problem is creating arbitrary object graphs from untrusted sources.
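The commenter’s point can be illustrated in a few lines. The Greeter class and the “untrusted” strings below are hypothetical, but instance_eval really is a public method, so public_send happily reaches it:

```ruby
# public_send only restricts dynamic dispatch to public methods -- but
# dangerous methods like instance_eval are themselves public, so an
# attacker-controlled method name can still evaluate arbitrary code.
class Greeter
  def hello
    "hi"
  end
end

g = Greeter.new
g.public_send(:hello)   # legitimate dynamic dispatch, returns "hi"

# Imagine these two strings arrive from an untrusted source:
method_name = "instance_eval"
payload     = "2 + 2"
result = g.public_send(method_name, payload)  # attacker-supplied code runs
```

The real fix is to validate what may be dispatched (e.g. against a whitelist of method names), not merely to swap send for public_send.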

Trackbacks

  1. […] operations such as sorting or statistical tests on the output. However, as James Edward Gray says, Ruby trusts us with the sharp scissors, so if you were to host such an interface on your own […]

  2. […] The recent outbreak of security problems in the Ruby and Rails worlds have led to a flurry of activity, as well as descriptions of what went wrong. Patrick McKenzie has written a particularly thorough blog post on the subject, and he described these problems in an interview on the Ruby Rogues podcast on February 20, 2013. You can listen to that at http://rubyrogues.com/093-rr-security-exploits-with-patrick-mckenzie. […]
