The state of Open Data 2014

I was reading an old blog post I wrote in 2011 about the state of Open Data in BC ( http://www.unrest.ca/2011-state-of-open-data ) and thought I’d pen another annual update. I should do this every year, but sadly, I’ve not had the time to really blog lately.

In 2011, I highlighted rights and privacy clearing, cost recovery, scaling and license diversity as major failures and opportunities for course correction in the emerging open data field — and I’m sorry to say, many of these problems materialized.

But we’ve also had a lot of successes since 2011 — the Open Data Society of BC (disclosure: I’m a member of the board of directors) has held two highly successful Open Data Summits that have convened hundreds of people from across Canada and even the world to talk about Open Data. My favourite memories of these events were the edgy talks, like Pete Forde’s entitled ‘People are dying to get the data’ ( https://www.youtube.com/watch?v=s7rpKYSZUDo ) because they really bridge a gap between the civil service and the data activism that is occurring all over the web today. These events help bring together people who would otherwise never meet, and invite them to learn from each other.

The Data BC group of the provincial government has been doing a great job with what limited resources they have — in the last couple of years they’ve facilitated the publishing of unprecedented transparency and accountability information in the form of fiscal data, the personal information directory and geographic data that has been hugely helpful to a number of stakeholders. They’ve done considerable work on licensing and on trying to source data — even where it doesn’t exist. I’ve come to like and respect the work they’ve done for BC in a challenging environment.

But there’s a problem in the foundation of this group as well — they don’t have a budget to replace funding for datasets that are currently being charged for (the cost recovery problem), they don’t have the statutory ability to command data release from other ministries, and they don’t have the resources needed to implement the commitments made in the G8 Open Data Charter — especially the transformative commitment to an ‘Open By Default’ policy. This fix will have to come from cabinet, take the form of significant budget increases and involve the creation of a legislative framework. Moreover, the architecture of data release will have to change — a central team fetching data for a portal won’t scale. Data release has to be federated within each ministry, and just as each ministry has an officer responsible for handling FOI requests, so too should they have one to handle data requests. It’s 2014; it’s time to make data exchange as seamless and as common as email in the public sector.

The lawyers are also hurting the economics of open data — while much progress has been achieved on licensing, there are still very real debates about conformance to the Open Definition and serious problems with the legal disclaimers of liability for intellectual property and privacy rights clearing. It is my belief that these issues are hurting commercial investment in Open Data.

Across the country, other groups are also making positive progress — the Federal Government included a large funding commitment for Open Data in their 2014 budget, they’re hosting hackathons (which they misspell as appathons [because hackers = bad, of course]) and Treasury Board President Tony Clement is taking every opportunity to talk about the benefits of open data and the future promise of a more transparent public service. There have been major wins with digital filing of access-to-information requests, and citizen consultation exists in this area. The publication of valuable datasets like the Canada Land Inventory and Border Wait Times is also impressive.

But there are big failures here too. Canada Post is suing innovators over their use of Postal Codes ( CBC Story ) and DFO’s hydrographic data remains closed and mostly collecting dust. The government seems to be ignoring responsibility for Canada Post’s behaviour, but most will point out that they have jurisdiction over the Canada Post Corporation Act and could make a simple, common-sense legislative change to resolve this embarrassment to our federal open data commitments.

We’re making progress municipally — the City of Vancouver has made amazing strides in digital archiving, making digitized archives available on the local intranet in a unique and groundbreaking way that deals with intellectual property concerns. The City of Victoria has embraced open data: they launched VicMap (making their cadastral data open), began webcasting council meetings and published an open data portal. They even hosted a hackathon, with Mayor Dean Fortin and Councillor Marianne Alto helping the Mozilla Webmaker folks teach children about digital literacy and creating the web [ link ]. The City of Nanaimo continues to lead the pack with real-time feeds, bid opportunities, maps of city-owned fibre resources, digitally accessible FOI processes and so much more.

In the private sector and NGO space there are many notable projects — the GSK-backed Open Source Malaria project being my favourite. There are also successes like Luke Closs’ and David Eaves’ recollect.net in the civic app space.

The hacker space is also seeing some success, with proof-of-concept prototype applications developed by citizens at hackathons going on to inspire civil servants to create their own versions and publish them widely. The BC Health Service Locator App and the Victoria Police Department App both get credit for listening to citizen input. Other apps have been created and have seen little to no uptake, like those developed to help citizens better understand freshwater fishing regulations (mycatch), or storefront display apps to help the homeless find shelters with available space (VanShelter). The next steps here are clearly to create bidirectional projects that allow civil servants and citizens to work collaboratively on applications using the open source methodology. (Who wants to be the first to get the Queen in right of British Columbia using GitHub?)

Other projects have failed to find traction due to a lack of data, or bad-quality data. My OpenMoonMap.org site is failing due to unreliable WMS-only access to data from NASA, which is down more often than it is up. The lesson here: online services are no replacement for downloadable primary-source data.

mycelium.ca (my House of Commons video service) is in its 7th year of operation, and continues to prove that even simple prototype apps can be useful and long-lived, drive policy change (House of Commons guidelines) and find feature uptake (Hansard now has video clips embedded). Hopefully same-day recording download, clipping and linking will be added to ParlVU and this app will no longer be useful.

For the coming year, the Open Data Society of BC is crowdsourcing its agenda, and I’d encourage you to participate in that discussion and to join or support the society via OpenDataBC-Society.

I know I missed some people and agencies who are doing great things, so please leave comments if I missed you. (Tweet me @kevinsmcarthur for an account, as I don’t monitor this site’s admin pages often.)

(Updated) Evaluating the security of the YubiKey

The folks over at Yubico have responded to this article, and I'm happy to post their letter. It gives a little additional context to the issues I presented and a critical 'other side' response. I'm happy to see the company actively engaging and addressing the issues so quickly. There are a couple of bits that need clarification: for example, the nonces I point out are actually used in places other than inside the synclib, and the 'facilities' issues re the Las Vegas 'shipping center' were purposely left vague to avoid exposing what appears to be a residential address.

-- Yubico Letter --

At Yubico, we enjoy reading articles from security experts such as yourself, and we appreciate the visibility you provided Yubico through your detailed evaluation of our Yubikeys. Our security team at Yubico takes your assessment very seriously, but there are some clarifications and intricacies that we wanted to share with you that we’re confident will convince you that the Yubikeys offer the highest grade of enterprise security in a comparative product class. Please feel free to contact us if you have any further questions/comments…

The Protocol.

- The Yubikey personalization app saves a .csv logfile with the programmed key values, meaning a malware-based attack may discover the log files on block devices even when the files have been deleted

In the most popular scenario, customers choose to use YubiKeys configured by Yubico, where the cryptographic secrets are generated using the YubiHSM’s TRNG and programmed into YubiKeys in Sweden, the UK or the US at our programming centers, which use air-gapped computing [at least one air gap between the programming station with its control computer and any network]. The plain text secrets database generated on the YubiHSM is encrypted with the customer’s validated public PGP key and signed by the programming station control computer’s secret key, then the plain text file is zapped in place and the secrets securely deleted from disk and memory on the programming station. At Yubico, we call this the “Trust No One Model”!

The Yubico personalization app provides customers the flexibility to program their own keys at their convenience. Yubico does acknowledge that customers programming their own keys may not be aware of the risk that the AES keys are stored in the .csv file, and we are working to change the default behavior and provide additional warnings to inform users of the potential risks.

- Replay prevention and API message authentication are implemented on the server receiving the OTP — this has resulted in a number of authentication attacks like https://code.google.com/p/yubico-pam/issues/detail?id=18 which are now corrected in later versions of the protocol. The design, however, trusts the server with the authentication security and thus presents a problematic architecture for shops that do not have crypto-aware developers and system admins able to verify the security setup is working as intended.

As you have astutely observed, we’ve fixed the issue you’ve seen in later versions of the protocol. For customers who don’t have crypto-aware developers and system admins to secure authentication servers, we recommend working with solutions from our trusted partners.

- The replay prevention is based heavily on the server tracking the session and use counters and comparing them to a last-seen nonce. It also depends on application admins comparing public identities. This should ensure that the keys cannot be intercepted and replayed. Some servers do not properly validate the nonce or the HMAC, or properly protect their databases from synchronization attacks. Some implementations do not even match the token identities and will accept a status=ok as a valid password for any account (try a coworker's YubiKey on your account!). The weak link in the Yubikey-with-custom-key scheme seems to be the server-application layer.
- The Yubikey protocol, when validating against Yubico’s authentication servers, suffers from credential reuse. It is vulnerable to a hostile server that collects a valid OTP and uses it to log in to another service, and it's vulnerable to hacking or maliciousness of the authentication servers themselves. You are delegating the access yes/no decision to Yubico under the cloud scheme.

Customers who are concerned about using the YubiCloud infrastructure for YubiKey OTP validation should consider implementing their own authentication and validation servers. Yubico provides all the necessary server components as free and open source code. Customers may also choose to configure and use the YubiKey with their own OATH-based authentication servers.


The code.

- The yubikey-val server contained a CURLOPT_SSL_VERIFYPEER = false security vulnerability in the synchronization mechanism.

We have fixed this issue about certificate validation when using https for sync between validation servers. We do however want to point out that the vulnerability had only limited repercussions. This case is closed on GitHub, https://github.com/Yubico/yubikey-val/issues/15

- The nonce and other yk_ values, however, were not, and could be modified by a MITM attack. This presents a DoS attack against the tokens (by incrementing the counters beyond the tokens’ own) and possibly a replay issue against the nonces — however, replay concerns require further study and I have not confirmed any exploitable vulnerability.

The nonce used in the ykval protocol from the PHP client is predictable, but it is unclear whether this is an issue. We will be doing further review to address any possible exploit vectors if they exist. The case is still open: https://github.com/Yubico/php-yubico/issues/5

- There were also instances of predictable nonces. The PHP code ‘nonce’=>md5(uniqid(rand())) is used in several key places. This method will not produce a nonce suitable for cryptographic non-prediction use.

The server_nonce field is only used inside the synclib code to keep track of entries in the queue table, so we deem this acceptable. We provided further explanation on GitHub: https://github.com/Yubico/yubikey-val/issues/14

- The php-yubico API contains a dangerous configuration option, httpsverify, which can be used to disable SSL validation within the verification method. Again, the defence-in-depth approach protects the transaction, with the messages being HMAC’d under a shared key, mitigating this as a practicable attack.

We are working to resolve this issue that was highlighted in order to provide the defense-in-depth protection to eliminate the possibility to turn off https certificate validation in the php client. This case is still open, https://github.com/Yubico/php-yubico/issues/6

- The C code within the personalization library contains a fallback to time-based random salting when better random sources are not available [ https://github.com/Yubico/yubikey-personalization/issues/40 ]; however, I cannot think of a case in which a *nix-based system would lack all of the random sources it tries before falling back to time salts.

We made the relevant changes to the code and addressed this issue about salting when deriving a key in the CLI personalization tool on windows. Thank you for pointing this out. https://github.com/Yubico/yubikey-personalization/issues/40

The Neo

- As a result, between the personalization issues and the third-party software, these modes aren’t useful in a real-world deployment scenario, but may be useful to developer users. That said, other smartcard solutions support RSA signing in a smarter way than layering the OpenPGP application on a JavaCard, so are likely a developer’s choice over the YubiKey Neo in CCID mode.

We don't quite agree with your analysis of our NEO product and want to point out that there is a distinction between YubiKeys used for development work versus production roll-outs. We intentionally allow users to re-configure the keys, and while this allows for possible attack vectors, it does not mean the product or protocol is insecure. In a production roll-out, most of our customers choose to protect their YubiKeys with a configuration password.

Therefore, for the NEO, we allow development users to turn the CCID functionality on and off as they choose. In a production roll-out, this function is pre-configured and protected. In addition, it is not obvious to us what makes the NEO in CCID mode less usable than any other CCID + JavaCard smartcard product implementation. We have also introduced a PIV applet that allows for all flavors of PKI signing, including RSA.

Logistical Security

We respect the due diligence done by you to find out about our locations through public information available on the web. However, although Yubico resides in a shared facility which houses other popular internet companies, we have a dedicated office which is accessible by authorized Yubico employees only. Our current focus is on delivering products with the highest level of quality and security, and the big corporate office will come soon ☺.

Just-In-Time Programming & Efficient Packaging offers Yubico a competitive advantage:
Because of the size of the YubiKey and our unique packing technology, 1 operator and the Yubico Just-In-Time Programming Station can program over 1,000,000 YubiKeys per month. YubiKeys are handled in slabs of 10 trays of 50 YubiKeys [500 YubiKeys in total] with 4 slabs per 10kg box [2,000 YubiKeys]. A pallet of 100,000 YubiKeys weighs less than 500kg. Therefore, Yubico logistics and programming can be performed in facilities that are not available to other authentication hardware companies. Our logistics and programming team have all been with Yubico for more than 5 years and are among our most loyal and trusted employees. We pay particular attention to the security of our programming centers, and update our processes consistent with the advice of our market-renowned security experts.


Thank you!

 

-- Original Article Below --

Every so often I get to take a look at a new security device with the hopes of replacing our existing PKI-based systems, which, while very secure, are an administrative nightmare and don’t lend themselves to roaming profiles very well. This time it’s the Yubikey Nano and Yubikey Neo devices from Yubico that I’m evaluating.

All Yubikey devices support a custom OTP generator based on a home-rolled OTP implementation. There is a notable lack of formal proof for this scheme, but the security boils down to an AES-128 shared-secret design, which is reasonably secure against in-transit interception and key leakage. This appears to be the primary threat model the Yubikey is trying to protect against, and may be its most secure property.

The Protocol.

The protocol appears to have seen limited review against privileged attacks, and presents a number of security concerns, including:

- The ability to reprogram the devices to a chosen key before account association, due to default configurations being shipped both programmed and unlocked. Users are warned against reprogramming their devices as they will lose the API validation ability; however, an upload mechanism exists to restore it. Check to see if your key starts with vv rather than cc when using the Yubico auth servers, as this may indicate reprogramming.

- The Yubikey personalization app saves a .csv logfile with the programmed key values, meaning a malware-based attack may discover the log files on block devices even when the files have been deleted. Needless to say, with the AES keys from the CSV, the security of the scheme fails.

- Replay prevention and API message authentication are implemented on the server receiving the OTP — this has resulted in a number of authentication attacks like https://code.google.com/p/yubico-pam/issues/detail?id=18 which are now corrected in later versions of the protocol. The design, however, trusts the server with the authentication security and thus presents a problematic architecture for shops that do not have crypto-aware developers and system admins able to verify the security setup is working as intended.

- The replay prevention is based heavily on the server tracking the session and use counters and comparing them to a last-seen nonce. It also depends on application admins comparing public identities. This should ensure that the keys cannot be intercepted and replayed. Some servers do not properly validate the nonce or the HMAC, or properly protect their databases from synchronization attacks. Some implementations do not even match the token identities and will accept a status=ok as a valid password for any account (try a coworker's YubiKey on your account!). The weak link in the Yubikey-with-custom-key scheme seems to be the server-application layer.

- The Yubikey protocol, when validating against Yubico’s authentication servers, suffers from credential reuse. It is vulnerable to a hostile server that collects a valid OTP and uses it to log in to another service, and it's vulnerable to hacking or maliciousness of the authentication servers themselves. You are delegating the access yes/no decision to Yubico under the cloud scheme.
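The counter-based replay check described in these bullets can be sketched in a few lines; the field names and storage here are illustrative, not Yubico's actual schema:

```python
# Track the highest (session, use) counter pair seen for each public
# token ID. An OTP whose counters do not advance past the stored pair
# is a replay (or an old OTP captured earlier) and must be rejected.
last_seen = {}  # public_id -> (session_counter, use_counter)

def accept_otp(public_id: str, session_counter: int, use_counter: int) -> bool:
    prev = last_seen.get(public_id)
    if prev is not None and (session_counter, use_counter) <= prev:
        return False  # replayed or stale OTP
    last_seen[public_id] = (session_counter, use_counter)
    return True
```

A server that skips this comparison, or never binds the public ID to the account being authenticated, fails in exactly the status=ok-as-universal-password way described above.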

The code.

Yubico has taken a radical-transparency approach and published all their source code at https://github.com/Yubico. Despite what follows below, this approach should breed confidence in the product over time. When compared to closed-source products, the Yubico product would appear to have a leg up when it comes to identification and correction of security flaws. They are also taking a defence-in-depth approach to API security by signing and checking values even over TLS links. However, while this mitigates a number of coding concerns, it may introduce new ones, and I remain concerned about the use of an HMAC signature over user-controllable data, encrypted under TLS, as it is a case of a MAC-then-encrypt scheme.
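As a sketch of that defence-in-depth signing, the following HMACs sorted key=value pairs under a shared API key; the parameter names and encoding are my assumptions for illustration, not necessarily Yubico's documented wire format:

```python
import base64
import hashlib
import hmac

def sign(params: dict, api_key: bytes) -> str:
    # Canonicalize: sort keys, join as k=v pairs, HMAC under the shared key.
    msg = "&".join(f"{k}={params[k]}" for k in sorted(params))
    digest = hmac.new(api_key, msg.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def verify(params: dict, signature: str, api_key: bytes) -> bool:
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(sign(params, api_key), signature)
```

Note that this signature only helps if the server actually checks it; a client that ignores a bad signature (or a disabled TLS check, below) gets no protection from it.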

I did some basic analysis of the PHP and C code and found a number of concerning items. The yubikey-val server contained a CURLOPT_SSL_VERIFYPEER = false security vulnerability in the synchronization mechanism. Thankfully the developers had taken a defence-in-depth approach to the API, and the session and use counters were restricted from being decremented. The nonce and other yk_ values, however, were not, and could be modified by a MITM attack. This presents a DoS attack against the tokens (by incrementing the counters beyond the tokens’ own) and possibly a replay issue against the nonces — however, replay concerns require further study and I have not confirmed any exploitable vulnerability.
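The CURLOPT_SSL_VERIFYPEER = false class of bug has a direct analog in most TLS stacks. As an illustration in Python's ssl module, the vulnerable setting corresponds to switching off the default peer checks:

```python
import ssl

# A default client context verifies both the certificate chain and the
# hostname, which is what makes TLS resistant to a MITM.
safe = ssl.create_default_context()
assert safe.verify_mode == ssl.CERT_REQUIRED
assert safe.check_hostname is True

# The vulnerable configuration is the equivalent of VERIFYPEER = false:
# any MITM can now impersonate the sync peer with a self-signed cert.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # must be cleared first
insecure.verify_mode = ssl.CERT_NONE     # == CURLOPT_SSL_VERIFYPEER = false
```

The fix in each stack is simply to leave the defaults alone; verification options like these exist for testing, not production.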

There were also instances of predictable nonces. The PHP code ‘nonce’=>md5(uniqid(rand())) is used in several key places. This method will not produce a nonce suitable for cryptographic non-prediction use.
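The problem with md5(uniqid(rand())) is that both inputs are clock- and PRNG-seeded, so an attacker who can estimate the time window can enumerate candidate nonces. A Python analog of the weak pattern, next to the standard fix, looks like:

```python
import hashlib
import random
import secrets
import time

# Analog of PHP's md5(uniqid(rand())): uniqid() is essentially the clock
# in hex, and rand() is a non-cryptographic PRNG, so the output is
# guessable; hashing does not add entropy that was never there.
weak_nonce = hashlib.md5(
    (format(int(time.time() * 1_000_000), "x") + str(random.random())).encode()
).hexdigest()

# The standard fix: draw the nonce from the OS CSPRNG.
strong_nonce = secrets.token_hex(16)
```

Whether predictability is exploitable depends on how the nonce is used; for queue bookkeeping it may be harmless, but for anything an attacker can race or replay, a CSPRNG nonce is the safe default.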

The php-yubico API contains a dangerous configuration option, httpsverify, which can be used to disable SSL validation within the verification method. Again, the defence-in-depth approach protects the transaction, with the messages being HMAC’d under a shared key, mitigating this as a practicable attack.

The C code within the personalization library contains a fallback to time-based random salting when better random sources are not available [ https://github.com/Yubico/yubikey-personalization/issues/40 ]; however, I cannot think of a case in which a *nix-based system would lack all of the random sources it tries before falling back to time salts.

Logistical Security

I also took the opportunity to look at the security of the Yubico logistics process, and came up with a number of questions, not the least of which was that my Yubikey was apparently shipped from a residential address in Las Vegas, Nevada. This gives me pause with regard to Yubico’s claims that the keys are “Manufactured in USA and Sweden with best practice security processes”. I have questions about the chain-of-custody of the keys.

A public-sources investigation into the addresses provided on the Yubico site suggests the addresses are shared with other firms and seem to be co-working spaces rather than the typical corporate offices one would expect of a company providing security products to companies like Facebook and Google.

The Neo

The Yubikey Neo includes a JavaCard-based CCID/smartcard device and the ability to support the OpenPGP app. In testing, this is clearly beta software and requires kludgey ykpersonalize commands with arguments like -m82. It’s obviously not intended for widespread use, and as such it was discounted in favour of more mature CCID products.

Some of the YubiKeys also provide challenge/response and HOTP/TOTP clients. They’re implemented via middleware that isn’t first-party, is commercial software on OS X, and requires users to learn hotkeys.

As a result, between the personalization issues and the third-party software, these modes aren’t useful in a real-world deployment scenario, but may be useful to developer users. That said, other smartcard solutions support RSA signing in a smarter way than layering the OpenPGP application on a JavaCard, so are likely a developer’s choice over the YubiKey Neo in CCID mode.

Summary

In the end, I can’t recommend the Yubikey to replace our PKI systems. The Yubico authentication server API represents a serious information leak about when users are authenticating with the service, and puts Yubico in a trusted-authenticator position. It’s also the worst form of credential reuse: a hostile or malware-infected server can compromise unused OTPs and use them against other services.

What this means is that when using the default, pre-programmed identity, if someone were to break into any Yubikey-cloud-based system we use, they could break into all of our systems by collecting and replaying OTPs that have not yet been authenticated with the cloud. While you’re using application X, it’s using that credential to log into application Y and pilfering data. Despite being told not to, most users reuse passwords across services, so the login credentials from one service will usually work with another, and the attacker has everything needed for a successful login. This is in stark contrast to PKI-based systems, which always implement a challenge/response using public key cryptography. This also applies to any corporation using pre-programmed keys in a single-sign-on-type scheme, as any hacked application server can attack other servers within the SSO realm.

Yubico’s, or our own, loss of the API keys (not the tokens’ AES keys) used for server-to-server HMAC validation would also silently break the security of the system.

Because of the problems with the authenticator service architecture, we would have to program the keys into a single security zone, significantly weakening our currently partitioned services (currently, multiple PKI keys can be stored and automatically selected via certificate stores and/or PKCS#11). In the best-case scenario, this would leave us to program the Yubikeys ourselves and ship them out to end users, adding significant cost to an already expensive device and weakening our security architecture in the process.

In short, the device’s default configuration is not sufficiently secure for e-commerce administration, and pre-programming the devices is not financially viable due to shipping and administration costs. The SSO architecture creates a single security zone across many services, which is neither desirable nor best practice.

I will continue to seek out and evaluate solutions that offer PKI-level second-factor authentication security without the headaches of administering a production PKI.

Setting the record straight on Halifax E-voting.

There is currently a story making the media circuit on electronic voting in the Halifax municipal elections. This is the story of that election and how this information became known, and what remains hidden behind responsible disclosure today.

In September 2012 I learned that Halifax was going to be using e-voting and that they had been making claims about the security and viability of online voting – and so I reached out to colleagues in the security community to see if anyone had done a security evaluation of this e-voting solution.

The response I got back was that no one had done the research, because there were concerns about the climate for this type of work. For example, just watching a voter cast their vote could be considered an election offence in some jurisdictions. So I decided to do some basic 'right to knock' research before the election opened, rather than investigate during the voting period. I simply checked out the publicly facing voting instructions on the municipal website and visited the vote.halifax.ca website to see what security it presented to would-be voters. For example, was it presenting an identity-validated EV SSL certificate? I did some other basic security checks that didn’t require anything more than loading the webpage and looking up details in public registries. To my surprise, the voting portal had been set up by the middle of September (presumably for testing), and there were a number of items I found concerning in the implementation I was seeing.

So I wrote it up and sent it over to CCIRC (the Canadian Cyber Incident Response Centre)... these are the people responsible for managing cyber threats against critical infrastructure in Canada, and I've worked with them before on similar disclosures (like IN11-003). The process is known as “responsible disclosure” and gives the government and the vendors the opportunity to address the problem and make the information public once they have done so. It's generally considered impolite to talk about security vulnerabilities before they have been addressed, because they can be used by malicious persons before the systems are corrected.

I never heard back from CCIRC, except for a single 'ack'[nowledged] email confirming receipt. I assumed they were still working on the problem – and perhaps they still are today. Fast forward a few months: I'm discussing online security with a local group of individuals and I bring up the Halifax election as an example of a system I have concerns with. I don't tell anyone what the specific security issues are, and so afterwards a local journalist, Rob Wipond, comes up and asks me for more detail, essentially for proof. I tell him, “I can't tell you that, ask the government,” and point him towards CCIRC... little did I know he would, and did.

May arrives, and CCIRC has apparently fulfilled an ATIP request made by Rob Wipond; he sends the result to me for comment. It's mostly redacted, but it does show that they took the issues seriously and contacted the municipality and vendor to get the issues addressed. It says they mitigated some concerns, but not specifically which ones or what the fixes were. The redactions were unsurprising, as the information had not otherwise been made public at this time and many of the concerns would have been hard to resolve. We're not talking about a quick software fix, but rather altering voting instructions and redesigning how the system is implemented.

Rob apparently put together a video ( http://robwipond.com/archives/1257 ) and asked the vendor and the municipality for comment. I didn't think much of it; Rob hadn't discovered the details of the security vulnerabilities and was reporting on redacted documents and questionable audits. I'd never shared the vulnerability data with Rob, so he had very little to work with.

Nevertheless, the story gets picked up by CBC Radio and I hear Rob talking about the issue. You can listen to that here: http://www.cbc.ca/informationmorningns/2013/06/17/security-worries-over-website-used-during-halifaxs-last-election/ ... but he's still not got the details, so I decide to let the story continue along without my input; what can I add if I can't talk about the vulnerabilities?

Then everything changes. The next day, CBC has the HRM clerk and the vendor on air to respond to the concerns: http://www.cbc.ca/informationmorningns/2013/06/18/responding-to-e-voting-concerns/ ... I was shocked. During the interview, the HRM clerk discloses that we're talking about a “strip attack”. When asked “Was the election spoofed?” she reassures the public in no uncertain terms: “Absolutely not.” I was floored. Not only can they not know this, but they disclosed the type of security vulnerability in play. Then the vendor goes on about things that have nothing to do with these types of attacks, like immutable logs and receipts. They call the whole thing hypothetical, never pointing out that it's illegal to hack into a live voting system, so no one could give them proof even if they wanted to.

So now that the cat is out of the bag on the strip attack, I can talk about that part of the disclosure; those in the know call this ex-post discussion. Two of the three areas of concern remain secret, though, so I won't be talking about those items.

I've re-scanned the CCIRC disclosure document to remove the redactions around the now publicly known stripping attack. You can download that document here [PDF].

My final conclusion in the disclosure was:

“The election process in use may present a number of security and privacy challenges that electors may not be sufficiently aware of when deciding to cast their votes online. These vulnerabilities and lack of auditability may affect the perceived validity of the election result for those that did not use the online mechanisms to vote. The online election may need to be suspended in order to address these and other issues not here disclosed.”

I also make clear that “This can be achieved at scale sufficient to draw into question the election result and is difficult, if not impossible to detect as there are limitless network perspectives that could be attacked.”

I was also concerned to hear them suggest that these types of attacks are hard and require considerable cost and effort. The reality, as with any computer vulnerability, is that there are those who discover and publish these techniques (hard) and those who simply use them (easy). We call the latter "script kiddies", and yes, you can think of them as they are brilliantly portrayed in this Rick Mercer skit: https://www.youtube.com/watch?v=bmZazJR8ues

In this case, an SSL stripping attack could have been achieved with a piece of off-the-shelf software called sslstrip. It's not hard to use, doesn't require any considerable effort to install, can be set up at practically any point between the voter and the voting server, and could compromise the confidentiality of the voting information. The problem lies in the voting instructions: when users type in "vote.halifax.ca", their browser translates that into http://vote.halifax.ca, not https://vote.halifax.ca. Since there's no SSL at the start, the attacker simply makes sure it stays that way. Everything else looks identical to the user, save for a missing "s" in the URL bar and a lock icon that never shows up. But there was also a third-party domain in use at the time I did the research: the voter got redirected to a site called securevote.ca, which was previously unknown to the voter. The attacker could simply redirect the user to securedvote.ca instead, a cloned site with SSL set up, https in the URL bar and the lock icon lit, then drop the votes on the floor, collect credentials, and so on. You'd have to be really on your game to notice that securedvote.ca is not the same as securevote.ca in your URL bar, given you'd never heard of either before you visited the website. These plain stripping and hybrid stripping/phishing attacks are ridiculously common on the internet today; they are not difficult to mount and are particularly difficult, if not impossible, to detect, since no altered traffic ever hits the official servers.
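To make the mechanics concrete, here is a minimal sketch of the one rewriting step that a tool like sslstrip automates. This is my own illustration, not code from any real attack tool, and the page content and domain are made up:

```php
<?php
// Minimal sketch: downgrade every https:// reference in intercepted
// HTML so the victim's browser never upgrades. The attacker then
// relays the victim's plaintext traffic to the real site over SSL.
function strip_tls_links($html) {
    return str_replace('https://', 'http://', $html);
}

// Hypothetical page content for illustration.
$page = '<a href="https://vote.example.ca/ballot">Cast your vote</a>';
echo strip_tls_links($page), PHP_EOL;
```

From the official server's perspective nothing changed; the downgrade exists only on the victim's side of the relay, which is why this class of attack leaves no trace in the server logs.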

To actively modify the information in transit, for example to flip votes, an attacker would use this tool along with a simple shell script to modify parts of the communication between the voter and the voting servers. Contrary to assertions that you'd have to recreate an entirely new voting app, you only have to change a few lines of the in-transit data. At most it's a few hundred lines of script; it's the kind of thing the smart kid in your high-school computer lab can do. If it's done aptly, the voting servers see the user's original IP address and a legitimate SSL connection: SSL is stripped only from the voter's perspective, not the server's. In general (and I've not researched this particular solution), receipts and voter codes won't save the process, as the attacker can see the codes, hide the receipt entirely, hand out receipts for other legitimate votes (receipt multiplication), or simply include more form fields on the webpage that ask the voter for more information, like their name and address.

To the incredulous question of why anyone would go to that minimal effort, all I can say is: we are still talking about a public election, right? A recount is impossible, and a city council is the prize.

To the other concerns, well, ask CCIRC or HRM if they're willing to make those public too.

ALPR and Digital Civil Rights

Once again my fight for digital civil rights has landed on the front page of the Times Colonist, this time in relation to the ALPR (Automatic License Plate Recognition) surveillance system. I highly recommend reading the commissioner's report, which you can find at http://oipc.bc.ca/orders/investigation_reports/InvestigationReportF12-04.pdf

The report goes into great detail about the ALPR program and is derived from a lot of information that our research group has not been able to obtain under the freedom-of-information process, despite repeated requests for all documents of all types relating to the ALPR program. (Rob Wipond reports that he currently has six complaints before the federal information commissioner.)

The report found that non-hit data (data including the movement patterns of innocent Canadians) is being acquired and shared outside of BC's jurisdiction. It also makes crystal clear that where local police collect the information, they have custody of it and are subject to FIPPA regulations on their handling of that data. This includes neither storing nor sharing any data that, after scanning and comparison to an on-board hotlist, is no longer useful for policing purposes.

The commissioner's report also reveals a new data point which we were unable to access: obsolete hits. These are hits that were valid at some point in the database but are no longer valid when the vehicle is scanned. The report suggests that these false hits cannot be shared with the RCMP either. This requirement alone is a huge win for the accountability of the program, as it mandates review of each and every hit produced by the ALPR system before it is shared with the RCMP or used for secondary purposes. This should return ALPR to being a useful convenience tool for police plate scanning, but it will remove the system's dragnet surveillance capability, as it will likely necessitate manual review of the data produced.

That said, I was disappointed that the commissioner did not analyze the confidence rating of the system as a whole. With accuracy rates claimed in the 70-95% range for ALPR systems generally, they have the potential to generate tremendous amounts of false, incorrect information that will be used against people. The commissioner's report does give us two hugely valuable data points in this regard: for every 100 scans, only 1 is a hit, and in a 95%-accurate scanning system, 5 scans in 100 will be inaccurate. The report also states that 4% of hits are obsolete, further reducing the confidence of the resulting data. A Bayes' theorem analysis of the overall system's data confidence is definitely needed, but will require significant resources and access to do properly. The initial data, however, suggests that the system may produce significant volumes of incorrect data, and confidence ratings may be low enough to call the entire program into question, even in the hit-data context alone. Certainly, ALPR's use as an evidence-generating tool for court purposes will be easily challenged, and investigations that start from ALPR data may be subject to the fruit-of-the-poisonous-tree doctrine in certain jurisdictions.
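To sketch what such an analysis might look like, here's a back-of-the-envelope calculation using the report's figures. Note the assumption: I'm reading the claimed 95% accuracy as both sensitivity and specificity, which the report does not specify, so treat the result as illustrative only:

```php
<?php
// Back-of-envelope Bayes sketch using the report's figures:
// 1 scan in 100 is a hit (prior), claimed 95% accuracy (assumed here
// to be both sensitivity and specificity), and 4% of hits obsolete.
$prior       = 0.01;  // P(plate is genuinely of interest)
$sensitivity = 0.95;  // P(flagged | of interest)
$fpr         = 0.05;  // P(flagged | not of interest)

// Bayes' theorem: P(of interest | flagged)
$posterior = ($sensitivity * $prior) /
             ($sensitivity * $prior + $fpr * (1 - $prior));

// Discount the 4% of hits the report says are obsolete.
$current = $posterior * 0.96;

printf("P(of interest | flagged):   %.3f\n", $posterior);
printf("...and not an obsolete hit: %.3f\n", $current);
```

Under these assumptions only about 16% of flagged plates are genuine hits, roughly 15.5% after discounting obsolete ones. In other words, five of every six hits would be false or stale, which is the confidence problem in a nutshell.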

Overall, I'm thrilled with the report. It validates what I and my colleagues have been saying about police use of surveillance tools and is an incredible study into what these programs actually look like in practice. The data in the other-pointer-vehicle category, as just one example, shows just how broadly these programs are being applied. It also draws into question many previous statements by the authorities on the scope of the ALPR program.

I look forward to Victoria Police and the province fulfilling the report's recommendations on disclosure and access to information. Sunlight is always the best disinfectant.

Responsible Disclosure and the Academy

They say publish or perish, but what happens when publishing puts people's personal information at risk?

Last year I discovered that SSL, as it is implemented for server-to-server data transfers in popular software, has common and extremely serious problems. Merchant APIs from PayPal, Moneris, Google and others left people's credit card details open to interception. I discovered that OAuth libraries listed by Twitter were vulnerable, and found a bunch of other problems that I'm still in disclosure on. It was a big deal: I'd found what computer security folks call a 'class break'.

So I began a responsible disclosure process. Initially I had discovered a vulnerability in a single merchant provider's API and, as a good security researcher does, I contacted the vendor; this quickly went sideways. As a result, I enlisted the help of a local academic and trusted resource of mine, Christopher Parsons. Parsons recommended I get in contact with Tamir Israel, staff lawyer for CIPPIC, and thus began what was to be (and still is) a nightmarish saga of responsible disclosure and broken cyber-security programs.

I've not written this openly about peerjacking before today because the issue is still very much in play, but because Dan Boneh and his colleagues have published the research, I feel I must respond and set the record straight.

Tamir offered to help me free of charge and really went to bat for me, offering legal advice and help with the predicament I was quickly getting myself into. Thanks to Chris and Tamir, we got in contact with the Office of the Privacy Commissioner of Canada to disclose the vulnerability and see if it was within their area of responsibility. After a couple of phone meetings, it was determined that because there was no evidence of actual data being stolen via this vulnerability, and because I was unable to provide such evidence without engaging in illegal computer hacking, the matter was outside their jurisdiction. The privacy commissioner's process, it seems, is about cleaning up data spills, not preventing them. The file was referred to the CCIRC: the Canadian Cyber Incident Response Centre.

I had already emailed CCIRC but had received no reply. This is a small agency, and while they tell me they had received my disclosure, it's not clear what they had done before the privacy commissioner's office became involved. The group seems to work silently behind the scenes; if there was action, I didn't know about it. But once the privacy commissioner had referred the file, things changed. Tamir and I began what turned into a weekly discussion with CCIRC about the vulnerability, which culminated in Cyber Security Information Notice IN11-003. It was new ground for CCIRC (disclosing new zero-days is not something they had much, if any, experience with). I wanted the agency to name names, but they decided against it. The notice contains only a technical description of the vulnerability, with no context for who is at risk. Mom-and-pop shops using PayPal, Moneris or Google would likely never see the notice, nor, if they did, would they understand that their business and customers were at risk. Throughout the back and forth, I was rather miffed about the process, but Tamir rightly pointed out that the disclosure was helpful in that it provided a third-party look at the vulnerability and would help confirm what I was saying.

I couldn't live with not disclosing the affected software, though, and throughout the process I had found a number of other vendors that were affected. I'd even found Google Code Search and GitHub search terms that pointed to a much larger issue, and found other, non-PHP software that was affected. I had disclosed the affected software I knew about, and the code searches, to CCIRC during the development of the information notice, but they hadn't named names. I'm sure they had their reasons, but I had the public to think about. How were mom-and-pop merchants going to know to patch? What about their customers' credit card details and order information?

I decided to take the list of companies I knew had been notified, and who had had time to resolve the issue, and disclose the vulnerability in their software. That resulted in unrest.ca/peerjacking, but that isn't even close to a complete list, and CCIRC is, to my knowledge, still working on the issue of the GitHub and Google Code searches. For my part, after the disclosure I continued to work with PayPal and others to get them patched up. PayPal is still working on this issue in their APIs and towards merchant disclosure, which is what makes the Boneh et al. paper so troubling. I never spelled out how the other PayPal APIs were affected because the issue is still being resolved... responsibly.

So that brings me to today. Dan Boneh (a well-known academic cryptographer) and a number of colleagues have published "The most dangerous code in the world: validating SSL certificates in non-browser software". I had been given a heads-up in August that the paper was coming out, and I reached out to Dan.

[Normally I wouldn't share an email I sent publicly, but since I never got a reply, well, it's just my own writing.]

--

Sent on 08/15/2012

Hi Dan,

I was linked this morning to
http://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html by
<redacted> this morning. Just wanted to let you know that this appears to
be exactly like my prior published work on peerjacking,
(unrest.ca/peerjacking) and that informs Government of Canada, CCIRC
cybersecurity notice IN11-003.  
http://www.publicsafety.gc.ca/prg/em/ccirc/2011/in11-003-eng.aspx

I am still in responsible disclosure with a number of vendors on this,
including undisclosed vulnerabilities in other PayPal sdks, and non-php
related vulnerabilities, to which I believe you may be making reference.
I have been withholding a full whitepaper on this research while the
industry addresses and resolves these problems. This issue affects a
wide swath of the credit card processing industry and represents a
threat to critical infrastructure. I would appreciate your team not
disclosing the vulnerabilities in specific software before all
responsible disclosure avenues have been properly exhausted.

I am working with CCIRC and others to responsibly disclose this
vulnerability.

--

Kevin McArthur

I never got a reply, and I even blogged about it before it was released, but today I noticed that the paper has been made available to the public in PDF. I've read it, and I'm cited in it, but I've made clear previously that peerjacking affects more than PHP applications and that I've purposely not disclosed the details of the research due to the ongoing responsible disclosure process. To see this research published within the academy, in the face of my plea to respect the responsible disclosure process, troubles me. However, please know that I tried to make sure this vulnerability was resolved before it was widely disclosed.

Welcome to the world of peerjacking. SSL is dead.

Understanding the Lawful Access Decryption Requirement

Christopher Parsons (@caparsons) and I have just published "Understanding the Lawful Access Decryption Requirement" via SSRN

In it, we discuss the implications of Bill C-30 for encrypted communications, especially communications over systems that feature perfect forward secrecy or client software involved in key generation.

Certificate Pinning in PHP

UPDATED: The previous version of this script is affected by a security vulnerability in some PHP versions. Details are located at: https://www.sektioneins.de/advisories/advisory-012013-php-openssl_x509_parse-memory-corruption-vulnerability.html. I have updated the script below to include a version check, and if you are using this technique/code I strongly suggest you add similar version checks.

I spent a few hours today working on a piece of open-source technology that should help address some of the peerjacking concerns for PHP developers. This tool will help me secure our ecommerce clients, as well as help improve the SSL ecosystem within PHP itself.

So, without further introduction: certificate pinning in PHP. A reference design called Sslurp (written by Evan Coury (@evandotpro)) and a pin.php pinning script. This builds on work by Adam Langley and Moxie Marlinspike, who did the heavy lifting in developing the pinning algorithm and have also published pinning scripts (written in Go and Python, respectively). My PHP contribution to the pinning ecosystem makes it possible to set and verify pinned certs within PHP projects.


#!/usr/bin/php
<?php

/*
Copyright (c) 2012, StormTide Digital Studios Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.

   *  Neither the name of StormTide Digital Studios Inc, nor the names 
      of its contributors may be used to endorse or promote products 
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 
SUCH DAMAGE.
*/

//Version 0.2 Alpha

//Check for vulnerable openssl_x509_parse call.
if (version_compare(PHP_VERSION, '5.3.0', '<')) {
  //PHP less than 5.3.
  die("PHP Version Insufficient. See https://www.sektioneins.de/advisories/advisory-012013-php-openssl_x509_parse-memory-corruption-vulnerability.html" . PHP_EOL);
}
if (version_compare(PHP_VERSION, '5.3.0', '>=') && version_compare(PHP_VERSION, '5.3.28', '<')) {
  //If less than 5.3.28 in the 5.3 series.
  die("PHP Version Insufficient. See https://www.sektioneins.de/advisories/advisory-012013-php-openssl_x509_parse-memory-corruption-vulnerability.html" . PHP_EOL);
}
if (version_compare(PHP_VERSION, '5.4.0', '>=') && version_compare(PHP_VERSION, '5.4.23', '<')) {
  //If less than 5.4.23 in the 5.4 series.
  die("PHP Version Insufficient. See https://www.sektioneins.de/advisories/advisory-012013-php-openssl_x509_parse-memory-corruption-vulnerability.html" . PHP_EOL);
}
if (version_compare(PHP_VERSION, '5.5.0', '>=') && version_compare(PHP_VERSION, '5.5.7', '<')) {
  //If less than 5.5.7 in the 5.5 series.
  die("PHP Version Insufficient. See https://www.sektioneins.de/advisories/advisory-012013-php-openssl_x509_parse-memory-corruption-vulnerability.html" . PHP_EOL);
}

if (isset($_SERVER['argv']) && is_array($_SERVER['argv']) && !empty($_SERVER['argv'][1])) {
  $cert = $_SERVER['argv'][1];
} else {
  die("Error no cert provided. Usage ./pin.php cert.crt" . PHP_EOL);
}

if (is_file($cert) && is_readable($cert)) {
  try {
    $contents = file_get_contents($cert);
    $parsed = openssl_x509_parse($contents);
    $pubkey = openssl_get_publickey($contents);
    $pubkeydetails = openssl_pkey_get_details($pubkey);
    $pubkeypem = $pubkeydetails['key'];
    //Convert PEM to DER before SHA1'ing.
    $start = '-----BEGIN PUBLIC KEY-----';
    $end = '-----END PUBLIC KEY-----';
    $pemtrim = substr($pubkeypem, (strpos($pubkeypem, $start) + strlen($start)), (strlen($pubkeypem) - strpos($pubkeypem, $end)) * (-1));
    $der = base64_decode($pemtrim);
    //Calculate the SHA1 pin of the Public Key in DER format.
    $pin = sha1($der);
    $base64 = base64_encode(sha1($der, true));
    echo 'Pin for: ' . $parsed['name'] . PHP_EOL;
    echo 'Pin Hex (SHA1-long): ' . $pin . PHP_EOL;
    echo 'Pin Base64 (SHA1-short): ' . $base64 . PHP_EOL;
    echo 'Pin Chrome format: sha1/' . $base64 . PHP_EOL;
  } catch (Exception $e) {
    die('Could not get public key pin' . PHP_EOL);
  }
} else {
  die('Could not read cert' . PHP_EOL);
}


This CLI script will generate pins for SSL certificates; Evan Coury's reference design for validating a pinned connection can be viewed at https://github.com/EvanDotPro/Sslurp/blob/master/src/Sslurp/RootCaBundleBuilder.php#L130
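For completeness, here's a hedged sketch of how the generated pin might be checked at runtime. The helper below is mine, not part of pin.php or Sslurp; it just mirrors the same PEM-to-DER-to-SHA1 steps and accepts either a certificate or a PEM public key:

```php
<?php
// Hypothetical sketch: recompute a public-key pin at runtime so it can
// be compared against the "Pin Base64" value recorded with pin.php.
function public_key_pin($pem) {
    // Accepts an X.509 certificate or a PEM-formatted public key.
    $details = openssl_pkey_get_details(openssl_get_publickey($pem));
    $key = $details['key'];
    $start = '-----BEGIN PUBLIC KEY-----';
    $end   = '-----END PUBLIC KEY-----';
    // Trim the PEM armor, decode to DER, and hash.
    $body = substr($key, strpos($key, $start) + strlen($start),
                   (strlen($key) - strpos($key, $end)) * -1);
    return base64_encode(sha1(base64_decode($body), true));
}
```

At connection time one might capture the peer certificate via a stream context (`capture_peer_cert`), export it with openssl_x509_export(), and refuse the connection whenever public_key_pin() disagrees with the value recorded out of band.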

Additionally, the Sslurp package will, in the future, likely form the basis for bootstrapping a CA bundle into PHP projects requiring secure operation.

 

#PeerJacking - SSL Ecosystem Attacks Against Online Commerce (update)

I just wanted to put up a short update on my PeerJacking research. The vulnerability I discovered in July of last year, in SSL certificate validation in online commerce applications, is still in responsible disclosure with a number of software vendors, and I am awaiting successful mitigation before releasing the whitepaper describing the scope and impact of these certificate validation failures.

I've been informed by one of these vendors that strikingly similar research is to be presented at the ACM CCS 2012 conference. I am confused by this, as I have not been contacted by the researchers. They published an abstract that revealed a number of the companies affected by PeerJacking, including several that I am currently working with on mitigation of this critical vulnerability. I've attempted to contact the academics involved but have not seen any reply yet, though the abstract in question appears to have been removed from the web for the time being.

PeerJacking (a programming failure to verify peer certificates) affects nearly all major programming languages and is depressingly common. It is deployed widely throughout commercial server-to-server APIs from hundreds of distinct vendors. The breadth of the vulnerability and the sheer number of affected vendors are what make responsible disclosure so complicated, and why, 13 months later, I am still largely limited in what I can say publicly about this issue.

As before, please contact the Canadian Cyber Incident Response Centre for advice. They are the agency handling the responsible disclosure, notification and mitigation of this vulnerability affecting critical infrastructure.

See also (previous posts on this topic):
PeerJacking - SSL Ecosystem Attacks Against Online Commerce
Credit Card System Update (IN11-003)
Credit Card System Vulnerability
Update on Credit Card System Vulnerability.

Is there a safe way to upgrade users to https?

Update: Ryan has responded with "What's your organization's policy on SSL?", which covers these items in detail.

This morning, GlobalSign CTO Ryan Hurst put up a simple post: "Rewriting HTTP URLs to HTTPS URLs in Apache"

It's the advice I hear all the time from security-minded developers, but I think it's wrong, and it's wrong because of a flaw in the way the web works. That said, I've used the technique from time to time, and it works, a bit.

So what does it do? If you ask for http://www.example.org, you get HTTP, a 302 redirect and a reference to the https version of the URL. Your client then begins an HTTPS request. Sounds good, right? Sorta.
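For reference, the technique under discussion is typically a mod_rewrite rule along these lines (illustrative, not Ryan's exact configuration):

```apache
# Redirect every plain-HTTP request to its HTTPS equivalent.
# Requires mod_rewrite; place in the http (port 80) virtual host.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=302,L]
```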

The initial request is sent over HTTP, and the client is not expecting an SSL result. They're not checking for a cert or a lock icon, because they didn't ask for a secure connection. The developer thinks it's secure because he sees his users interacting over SSL, but there's a problem: that initial redirect is the weak link.

Let's talk about the typical Mallory-between-Alice-and-Bob problem. Bob wants to talk to Alice. Bob sends Alice a message and waits for a reply. Alice, not wanting to talk insecurely, tells Bob to go to a secure site at a specific URL. Bob dutifully follows the advice and goes to the secure site. The only problem is that Bob can't tell the difference between Alice and Mallory at this referral stage. Did Alice really tell Bob to go to the secure site, or did Mallory? Mallory can redirect Bob to an insecure site of her choosing. It's up to Bob to check that the referred site's identity validates and is secure, but Bob, as you'll recall, wasn't expecting secure messaging, and he is used to Alice telling him to go to the secure site when she wants to talk securely, so he isn't being paranoid about the authenticity of the referral. Bob has done this 100 times without issue. But on time 101, Mallory hijacks the referral, and now Bob is talking to Mallory, who relays his messages to Alice while pretending to be Bob. The protection SSL was supposed to bring never happened, and no one is the wiser: Alice sees a secure connection to who she thinks is Bob, and Bob sees no security to Alice, just as he expected. Everyone is happy, but Mallory is listening in.

Moxie Marlinspike demonstrated this attack with a talk and a tool called sslstrip. Yes, that's right: it's now off-the-shelf software.

So is it safe to upgrade users this way? Is there any value in the activity? Not really. In one sense, it changes the threat model from passive listening to active attacking, and so offers some security. But it also has a downside: it trains users not to ask for the https page. The sum of the security change: about zero.

So what can make it better? HSTS: HTTP Strict Transport Security. It works such that after the first time you see an https site, if you type http or visit an http link, the browser automatically upgrades you. It's a good technology, with privacy problems (see hstscookie.ca), but it also has some odd side effects when combined with Ryan's suggestion of a site-wide redirect: it will actually hide mixed-content errors on your pages for some clients. For a demonstration of this, see this page.
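For the curious, enabling HSTS is a one-line response header. In Apache (with mod_headers) it might look like this, served only from the https side; the max-age value here is illustrative:

```apache
# Tell browsers to upgrade all future requests to https for one year.
# Requires mod_headers; set only in the https (port 443) virtual host.
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```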

So, as a person or company interested in real https security, what do you do?

Three things.

1) Implement HSTS

2) Get listed in HTTPS Everywhere.

3) Pin your certs in browsers that allow this (See Google Chrome)

#PeerJacking - SSL Ecosystem Attacks Against Online Commerce

Responsibly Disclosed to Canadian Cyber Incident Response Centre [CCIRC], Office of the Privacy Commissioner of Canada and Canadian Bankers Association, July 15, 2011. Informs government Public Safety Notice IN11-003 http://www.publicsafety.gc.ca/prg/em/ccirc/2011/in11-003-eng.aspx released December 20, 2011. Due to the scope of the issue, vendor notification was performed by CCIRC.

Users of the following libraries should evaluate their software for exposure to IN11-003 (#PeerJacking). Many of these libraries are now patched by the vendors but affected versions will need to be deployed on end-user web servers.

Moneris eSelectPlus 2.03 PHP API
PayPal SDK Soap (MD5: ae8b2b7775e57f305ded00cae27aea10)
PayPal SDK NVP (MD5: 5a5d6696434536e8891ee70d33b551bd)
PayPal WPS ToolKit (MD5: a9e7c4b8055ac07bb3e048eecc3edb14)
Authorize.net Library (* Defaults to secure, but affected by configuration instructions. anet_php_sdk-1.1.6)
Google Checkout Sample Code (V 1.3.1 for PHP) (Article Updated April 2, Patched in V.1.3.2 Download Here)
OSCommerce 2.3.1
CiviCRM 4.0.5 (Update Apr 2: Still vulnerable as of V 4.1.1)
PrestaShop 1.4.4.0
Magento 1.5 (Update Apr 2: Vulnerabilities still exist as of version 1.6.2)
UberCart for Drupal (uberdrupal-6.x-1.0-alpha8-core)
Pear Services Twitter. (0.6.3)
Themattharris Oauth (< 0.61) (*Twitter indexed library https://dev.twitter.com/docs/twitter-libraries#php )
TwitterOAuth (File date: May 18, 2011, *Twitter indexed library https://dev.twitter.com/docs/twitter-libraries#php)

Additionally, the following GitHub search may help identify affected libraries. Here. Instances of CURLOPT_SSL_VERIFYPEER set to false or 0, and instances of CURLOPT_SSL_VERIFYHOST set to 0, 1, or true rather than the value 2, may indicate exposure. PHP ships with secure defaults for these values; this is not a vulnerability in PHP or cURL, but entirely contained within library code.
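To illustrate the difference, here's a sketch using PHP's curl extension; the endpoint URL is hypothetical:

```php
<?php
// Sketch of the vulnerable versus correct SSL verification settings.
// The endpoint URL is hypothetical; curl_init() does not connect.
$ch = curl_init('https://api.example.com/charge');

// The vulnerable pattern found in the affected libraries:
//   curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // accept any cert
//   curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);     // true == 1: no host match
// Either line lets an attacker with a self-signed certificate
// man-in-the-middle the connection.

// The correct settings (these are also PHP's defaults, so the safest
// library code simply never touches them):
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2); // cert must match the hostname
```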

Libraries where these defaults are overridden and not correctly set will be vulnerable to man-in-the-middle interception and modification of data in transit by an attacker using a self-signed SSL certificate and off-the-shelf software. Fixes to these libraries usually cannot be deployed centrally by the vendors and typically must be applied individually on all deployed client systems.

Please contact the Canadian Cyber Incident Response Centre for further mitigation information and advice. Thanks to Tamir Israel (CIPPIC) and Christopher Parsons for their assistance in responsibly disclosing this vulnerability.
