The long-term solutions City Council doesn't want to hear on homelessness

With the Times Colonist reporting that City Council doesn't want to hear about long-term solutions to homelessness at their upcoming town hall, I thought I'd publish some comments I had been preparing. They're rough, and not all workable, but hopefully they help folks understand some of the issues we have to discuss if we're going to solve this crisis. Repeated bouts of criminalization and "quick fixes" are doomed to failure and waste both the City's and advocates' resources on court challenges and needless harassment of our citizens.


In the spirit of speaking truth to power:


Initiatives to improve market affordability by creating market supply. 

  1. End the CRD urban containment boundary.
  2. Eliminate development cost charges (DCCs) that are not directly attributable to a project.

  3. Eliminate new development/business parking requirements.

  4. Eliminate the extortionate phrase 'amenity package' from council vocabulary.

  5. Reduce the number of zoning types from 628 zones to a handful representing residential, urban, commercial, and industrial zones.

  6. Eliminate zoning variance and spot zoning practices.

  7. Reduce the tax mill rate on residential units with assessments less than $1 million.

  8. Enact use-it-or-lose-it bylaws that require occupancy or an active development permit, with speculation taxes for non-compliance.

  9. Have council set development policy, and have staff handle approvals and rejections. End public hearings and council involvement in the approval of every shed built in the city. Return to a concept of strong and well-defined property rights, and allow civil courts, not council, to deal with NIMBY/BANANA-related disputes.

  10. Pay developers a bonus for every unit they create equal to 10% of the new taxes that will be generated for 10 years. (Incentives for creating newly taxable value)

  11. Improve transit options to allow for car-free living and eliminate parking costs.


Initiatives to deal with core causes of homelessness. 

  1. Safe consumption/injection site paired with well-funded rehabilitation programs.

  2. Self-exclusion programs for liquor retailers.

  3. Institutional care options for the most severe mental health issues when related to repeated criminal convictions.

  4. CrASBO (Criminal Related Anti-Social Behaviour Orders) framework for repeat-offender cases of theft, vandalism, intoxication in public, etc. End the revolving door cycle of arrest, release and re-offending for minor crimes.

  5. Greater funding for fiscal self-sufficiency programs. (Education in money management)

  6. Work-Ready programs to ensure everyone has valid identification, a social insurance number, up-to-date tax filings, bank account access, and access to clean clothing and personal hygiene services. Assist with filing bankruptcy and achieving a 'clean-slate' where applicable.

  7. Casual/at-will labour opportunities within the city: work opportunities based on a single day's effort or on a unit of production. We can do better than collecting refundable cans. Seek claw-back waivers from welfare programs to allow retention of benefits while doing a minor level of qualified casual work.

  8. Public outreach programs telling citizens of Victoria not to give to panhandlers, and highlighting better donation opportunities for the same charitable dollar: food banks, Our Place, etc.

  9. Adopt a case-management approach to each person experiencing homelessness -- tailor personalized solutions and interventions appropriate to each individual. Start with chronic criminal reoffenders.


Initiatives to deal with youth and young-adult homelessness.

  1. Fund more young-adult care options. Too many at-risk children age-out-of-care and are thrown to the streets with no safety net or supports.

  2. Better funding for social work programs for in-home interventions to deal with parental abuse, mental health and addictions issues.

  3. Positive youth opportunities for casual community contribution and paid work.

  4. Create stable and appropriate market housing opportunities for families.

  5. Deal with sources of societal and family marginalization including supports for LGBTIQ youth.

  6. Fund more anti-bullying/harassment programs and support systems for victims of this behaviour.

  7. Better integrate community policing and restorative justice programs in a way that allows youth to see police and social workers as friendly partners, rather than always in a disciplinary/negative interaction setting.

  8. Provide better self-learning opportunities for literacy, numeracy, and computer skills. Provide a free and self-paced path to cognitive employment and a Dogwood Diploma.

  9. Provide free pathways to pardon services for the rehabilitated. Allow young-adults to escape the stigma of their past actions and achieve a 'clean slate' upon which to build.

  10. Ensure there are market housing options affordable at 30% of a median individual salary ($27,200/yr as of 2013), i.e. $680/mo.
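That $680 figure is just the standard 30%-of-gross-income affordability rule applied to the quoted salary; a quick check:

```python
# 30% of a $27,200/yr median individual salary (2013), expressed monthly.
median_annual_salary = 27_200
affordable_monthly_rent = median_annual_salary * 0.30 / 12
print(round(affordable_monthly_rent))  # 680
```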


Initiatives to deal with the symptoms of homelessness

  1. Chattels protection. (Lockers placed throughout the city with time-release locks where homeless can place belongings for a period of time, and with a disclaimed expectation of privacy enforced by user-agreement such that police can search as appropriate)

  2. Post-office box services where homeless can receive mail. An address is core to receiving many government and employment services.

  3. Basic tenting platforms with bike/chattels lockers in city parks experiencing camping. Usage would be by permit, available at homeless shelters and needs-tested. Permits could be revoked for those who offend the social order (e.g. public-view drug use, chattels not within a tent or locker, etc.). To reduce the security/neighbourhood impact, a maximum of 6 individual platforms per acre should be targeted. Tents must be taken down during the day, but may be stored in park lockers. Tents (see the red-tent campaign design) would be supplied with the permit, and only the approved tents may be used on the platforms.

  4. A public washroom station (including a shower), sharps container and emergency call station placed in every city park.

These are just a few additional ideas, and they aren't intended as a replacement for the valuable and needed contributions of affordable & project housing, homelessness supports and counselling services provided by dozens of organizations throughout our town.

Update 3: Yubico reinvents the Yubikey

Last January I did a comprehensive review of the Yubikey for a client and published the results to this blog. My overall verdict was disappointment and a non-recommendation for the technology. I still hold that opinion when it comes to Yubico's OTP technology, but it appears they took my review (and similar ones throughout the industry) to heart and have spent considerable time and effort reinventing the technology. I'm pleased to have the opportunity to review the new technology, for the same client, and with the same end goal in mind: replacing an installed-cert PKCS12/CA PKI solution with a more easily administered solution that maintains the same (or ideally better) security level.

Yubico has launched three new products: the NEO, the NEO-n, and the U2F Security Key. You're probably asking yourself 'but there was already a NEO?', and in some ways you're right -- the early NEOs shipped under the same name but were a very different product. I panned them in my previous review for trying to make the OpenPGP applet do things it was never meant to do, for having user-replaceable applications, and for generally having poor administrative tools. The good news is that those concerns have been addressed with the new NEO, and it truly is an entirely new device worthy of a second look.

The new keys are all based on similar technology (the NXP A700x MCU) and have three distinct modes of operation. The U2F Security Key is limited to U2F mode, while the NEO and NEO-n can operate in OTP, CCID, or U2F modes. I'll get into U2F shortly, but of more immediate interest is the CCID mode. They've fixed the user-management of the card applications, which now ship fully configured and with a static configuration. The end user cannot upgrade the apps, but neither can a hacker replace them with malicious versions. It's a solid security move and one they should be commended for.

In CCID mode the keys implement a new applet supporting the PIV-II standard, which is an appropriate and ideal choice for identity and authentication. With PIV-II support, the keys can interact with pretty much any standard PKCS11-supporting software, from OpenSSH to Firefox. PIV works with proper X.509 certificates, CSRs, and the like, and does so in a way that is both predictable and stable. The keys still support OpenPGP but, critically, only for use with OpenPGP itself and not for other authentication models to which it is not suited. This is as it should be, and one can tell that they've considerably upped their game with the PIV approach.

I've done a thorough test and review of the PIV applet, and while there were some challenges with building and using the key personalization tools, these challenges did not considerably interfere with my ability to set up the key. In this regard, better end-user testing on multiple platforms and better user documentation would go a long way, but a determined sysadmin is going to work through them as I did. Once the keys are personalized, they behave as expected; in my testing, flawlessly with OpenSC 0.13.0's PKCS11 interface. CCID smartcards are a hard technology to get right, and at least in this respect the Yubikey NEO now appears to be leading the pack.

From the perspective of a user of a personalized key, the only barrier to entry is the installation of OpenSC and the configuration of the PKCS11 modules -- which may be advanced topics for some end-users, but ones that largely go with the CCID territory. In testing I was able to get SSH key authentication, with an onboard-generated RSA-2048 key, working as intended against an Ubuntu 14.04 server.

Overall if you are in the market for a CCID smartcard, I would recommend the new Yubikey NEO and NEO-n devices.

Moving on, however, it's generally recognized that CCID technologies are not going to be accessible to typical end-users, who really only want to use a smart card for web-browser-based authentication and expect a driverless, plug-and-play experience.

Enter U2F and the FIDO alliance.


The FIDO alliance is a massive collaboration of industry giants on what 2nd-factor authentication should look like on the web -- and it's quite good and very ambitious. The technology involves a bidirectional authentication architecture, borrowing heavily from site-pinning technologies and certificate authentication solutions. Interestingly, it has succeeded in simplifying the user experience for certificate enrolment over traditional PKCS11 solutions, and as such is likely usable enough for mass adoption.

From a security standpoint, the FIDO U2F standard is mostly solid, but has some caveats that can weaken or even defeat the security architecture it provides. I have great reservations about the pluggable design of the technology, and fear that cheap tokens and software emulation will weaken the security level desired by site operators. The technology gives the user considerable rope with which to hang themselves -- and that's an architectural decision I question in a security product. Fortunately these issues don't apply directly to the Yubikeys, but rather to the ecosystem and server middleware around them.

The only hard security concern I have with U2F relates to how keys are stored on the token devices. U2F specifies that each site will receive a unique key and that keys will not be reused between sites. This is a great design feature and ensures both privacy and security. It does, however, require a fairly large amount of on-device memory to store what could be hundreds of security keys. To work around the secure storage requirement, the spec also allows tokens to use a wrapped-key pattern, where the U2F private keys are stored, encrypted, outside of the secure element, with only the device master encryption key being stored securely in the element. The idea is that so long as the master key is secure, the encrypted-but-stored-in-insecure-memory keys should be too. I fundamentally disagree with this design decision, as it allows for token cloning by breaking the encryption scheme or by factoring a single key. This element alone likely makes the technology unsuitable for applications requiring high-level FIPS certifications, or for the management of classified or sensitive information. That said, the spec allows for tokens that do not wrap keys, letting end-users choose more secure tokens from the market (though none exist yet!). The Yubikey NEO has a very limited amount of secured memory (there is ~80 KB of EEPROM in the NXP A700x), and I therefore posit that it implements a wrapped-key pattern. (Update 2: Yubico has released some detail on their wrapping mechanism, which is actually a derivation mechanism; no full technical description is available from Yubico at this time.) Because of the use of wrapped keys, it is my opinion that these devices should not be recommended for high-security use where device cloning is a concern. Update 2: The Yubikeys implement a remotely-referenced derived-key pattern, whose security relies primarily on HMAC-SHA256 pre-image resistance. This allows the Yubikey to derive a key from a publicly shared combination of nonce and application ID, mixed with a master key.
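Yubico's exact scheme isn't fully documented, but a derived-key pattern of this general shape can be sketched with standard-library HMAC (all names here are illustrative, and HMAC-SHA256 stands in for the token's actual per-site key generation):

```python
import hashlib
import hmac

def derive_site_key(master_key: bytes, app_id: bytes, nonce: bytes) -> bytes:
    """Sketch of a derived-key pattern: the token stores only the master
    key in its secure element and re-derives each per-site key on demand
    from the publicly shareable (app_id, nonce) pair. The scheme's
    security rests on the pre-image resistance of HMAC-SHA256."""
    return hmac.new(master_key, app_id + nonce, hashlib.sha256).digest()

master = b"master-key-held-in-secure-element"  # never leaves the token
site_key = derive_site_key(master, b"https://example.com", b"\x01" * 16)

# The same public (app_id, nonce) pair always re-derives the same key,
# so nothing but the master key needs secure storage.
assert site_key == derive_site_key(master, b"https://example.com", b"\x01" * 16)
```

Note that without the master key, knowledge of the nonce and application ID reveals nothing useful about the derived key; the flip side is that anyone who extracts the master key can re-derive every site key.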

A caveat to the above: the U2F specification does include a counter that should prevent long-term dual use of a cloned token. However, in testing I determined that the Yubikey U2F developer reference application does not correctly handle the counter feature: it treats a bad counter only as an authentication failure, rather than as an event that locks out the token. This allows an attacker to find the valid counter by starting at the cloned value and iterating upwards until a login is allowed. The user of the real token would see one auth failure, but would likely try again and succeed. This is a serious security failure in the reference implementation. I fear that relying on server implementers to correctly handle complicated security protocol elements like this counter will lead to failures; we've seen it time and time again, from verifypeer to bad random sources. Basically, if there's a way for a developer to screw it up, they will. Interestingly, I also found issues with the way randomness was being generated in the reference application; however, it was promptly fixed after a bug report and was not likely to affect most implementations.
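For illustration, the counter check the reference application gets wrong can be sketched as follows (a hypothetical server-side helper, not the actual reference code):

```python
def check_counter(last_seen: int, received: int) -> str:
    """Sketch of strict server-side U2F counter handling. A counter at
    or below the last seen value is evidence of a cloned token, so the
    token must be locked out; merely failing the authentication lets an
    attacker iterate the counter upward until a login succeeds."""
    if received > last_seen:
        return "accept"      # normal case: the counter advanced
    return "lock_token"      # possible clone: lock, don't just fail
```

The reference application effectively returns "try again" where this sketch returns "lock_token", which is exactly the gap a cloned-token attacker exploits.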

The specification, and technology, requires a significant level of expertise for relying parties to implement the server side correctly and this implementation should not be taken lightly. I would recommend anyone implementing U2F engage a reputable security firm and perform hostile testing against the service. Aggressive logging of certain authentication failures is an absolute must.

Update 1: Oct 30, 2014

Another serious security bug in the U2F reference application was found by another developer. It's recorded as Issue 6, and the reporter 'araab' points out that the U2F challenge-response mechanism is not being properly validated. This again points to the fragility of the implementation side of the U2F equation, and to how hard it is to program crypto software securely.

Beta Software

Moving beyond the specification, we get into the implementation software running the U2F experience for the end-user. Currently only Google Chrome is supported, and only with an extension. In this author's testing the product launched in a horribly broken state, with the browser reliably and predictably crashing. I have verified the result on a number of machines, and the reviews of the plugin report similar issues. This is truly disappointing, and the device should never have launched with such glaring problems in the accompanying software stack. That Google also launched Security Key is truly surprising, and I am somewhat taken aback that this got past Google's technical review. This issue affects Chrome 38 on OS X 10.10 and would prevent the deployment of this technology until a fix is deployed.

After complaining on twitter, I received some helpful advice — to try the beta channel of Chrome. So I did, and I’m happy to report that Chrome 39 has resolved the main crashing problem. However, problems remain with regards to multiple tokens inserted at the same time.

The issue of multiple tokens might not seem like a big deal, but if one uses CCID mode for an SSH authentication token (say, a permanently connected NEO-n) and then tries to make use of U2F tokens as well, the technology currently breaks down.

It is also not possible to concurrently use U2F and CCID mode from a single token. (Update 3: This is now fixed with Yubico NEO Manager 1.0 [released Nov 19, 2014]; multi-mode use is now enabled and works as expected.)

The NEO tokens are also slightly hard to configure. You need to download an application, referenced by a link in a PDF document, to switch the token's mode from the default OTP to U2F. For this reason, I would strongly recommend that anyone implementing U2F specifically target the blue U2F version of the device, which ships in U2F mode and is ready for registration upon first insert without further user configuration.

Here the documentation about how to get started with the token could be significantly improved. 

The Hardware

The hardware feels surprisingly resilient, with the NEO-n being nearly indestructible. The NEO key format fits nicely into a USB slot on a MacBook Air, but could be snugger, as the key flops around as you try to press the button.

I believe the keys will stand up to reasonable wear and tear and are suitable for purpose.

Supply Chain

In the last review I noted that the Yubikeys were being shipped from a residential address, which gave me pause with regards to the chain of custody. I'm happy to report that my new keys arrived having been shipped from a commercial address in Palo Alto.


Overall I will likely recommend the Yubikey U2F solution to my client, but conditionally on the market adoption of Chrome 39+, and on the resolution of the multi-token problems. This likely means a pause in adoption for now, but recommending future support in the product map, and switching the functionality on at some future date when the browser experience stabilizes. 

I would, however, not recommend the Yubikeys to anyone requiring a high level of security whose threat model includes cloned devices. This non-recommendation is due to the wrapping (Update 2: derivation based on HMAC-SHA256 and its pre-image resistance properties) and storage of keys (Update 2: nonce + MAC) outside the secure element, and to problems with the counter-verification model. (Update 2: I'm also looking at how the keys handle counters internally, but have not yet completed my analysis.) While I am not aware of any cloning attacks on these devices, the architecture of the key leads me to believe they are possible, and that such attacks may not be reliably detected due to defects in server implementation models. I would recommend Yubico develop a security checklist for implementers that clearly specifies what to do with a counter conflict and other authentication failures. I would also recommend that they open-source their key-wrapping technology so as to invite peer review of the approach.

Cloudflare, Keyless SSL and the selling of 'secure'.

Cloudflare is an amazing content-delivery-network (CDN) platform that powers a lot of the web. They make the web faster and more stable by distributing content around the globe so that it is nearer visitors, blocking abusive users and offering a school-of-fish approach to web security. They also have a unique reputation in the security field, both for making predictions about security that turned out to be wrong, and for having the openness and transparency to make their assertions publicly and to admit mistakes. Their Heartbleed challenge and their general approach to security helped the community take a theoretical problem in TLS (the technology that tries to secure the web) and prove that it was an actual issue deserving of immediate mitigation and attention.

Yesterday, Cloudflare announced a new product offering called Keyless SSL. They held the technical details back from the press until today. Now with the full technical detail release, I write this post as my excitement for a claimed innovation turns again into the disappointment of overhyped marketing and a product that doesn't deliver on the promise of secure end-to-end computing in the cloud.

The marketing material boasts that you can have "secure" TLS operation without divulging your Private Keys. The key claim made in their blog post yesterday was:

"The lock appeared and the connection was secured, end-to-end." [emphasis mine]

There's only one problem -- it's simply not accurate, and the connection is not secure end-to-end. That is, the connection isn't secured between the user and the organization they're doing business with. The reality is that the connection is being intercepted, in the middle, by a third party, with all the inherent security implications native to MITM proxies.

To their credit, Cloudflare has put up a great technical description of what they have built.

Here's where the claims of security and end-to-end encryption break down: the Session Keys. In the Keyless SSL architecture, the Session Key is shared with Cloudflare. Whether it's the confidentiality of user input (think accounts and passwords), the content of the website (think bank balances), the security policy of web scripting (think same-origin), or persistent storage access (think cookies), it's the Session Keys that form the foundation of the web's security and confidentiality model. Without confidentiality of the Session Keys, you don't have any security.

With the Session Key you have the read/write ability to view and modify any of the data flowing between the user and the organization they are doing business with. This gives rise to a host of legal problems that come from the /ability/ to break TLS connections, but the short of it is: if you can read and write the confidential data in the middle of the connection (as at Cloudflare), then the session is no longer end-to-end secure, and at some point, if you're big enough, you should probably expect a government to come knocking (or even hacking).

If you're an organization and you willingly give those Session Keys to a third party, you're deputizing them for your entire online business. The product is anything but "Keyless" and involves significant confidential data disclosure to Cloudflare. While the service is running (read: the normal state of things) they have all the same authorities and problems that you do as relates to session security, and the same problems of maintaining the confidentiality of your user information.

So let's go back to banking. Banking is hugely complicated, but one of the key obstacles to its adoption of cloud technology is that strong privacy protection rules govern banking confidentiality. These prohibit a third party from receiving confidential information about account holders -- and they make it impossible to share a Private Key or break encryption to allow for the use of a CDN service. Such prohibitions apply despite the benefits to the bank of using a CDN service, such as improved fault tolerance and service resiliency. Kicking the proverbial can down the road from the SSL certificate's Private Key to the Session Key doesn't change anything as it relates to the confidentiality of banking information, like seeing the user's account username, password, or account balances. In both the Private Key and Session Key scenarios, Cloudflare's service can read your client's information (compelled, accidentally, or hackerishly) -- and that's the rub: it's just not end-to-end secure.

Law Enforcement and Political Risk

There's also the issue of the Law Enforcement Agencies (LEAs) using the Cloudflare service to break into user accounts. In effect, a Cloudflare-like system expands the range of actors that can be forced to disclose sensitive key material to government agencies. This includes ongoing and logged access to the generated Session Keys and TLS parameters, which could be compelled by government order.

We all know about the cheeky spy-agency 'SSL added and removed here' slide, but even outside the clandestine, we have witnessed the willingness of American authorities to compel American companies to disclose this type of information before. The case in point is Lavabit. In the Lavabit case, the company, which was strongly marketing privacy and security as key service selectors, was forced by legal process to provide cryptographic material to facilitate the installation of interception devices against the entire Lavabit service. The technical design of the system did not protect against such a brazen, service-wide approach to user-account access by law enforcement. Thankfully, Ladar Levison, after a long legal fight, shuttered the Lavabit service as a result of the political damage.

This is all to say that technical security to guard against oneself as a bad actor is extremely important. It takes more than rhetoric and the hope of doing good to build truly secure services that are immune to the political risks we saw with Lavabit. If you want to know more about the Lavabit nightmare, Moxie Marlinspike has an absolute must-read on the subject.

The lesson here is that it is important that Cloudflare and other services trying to implement TLS services truly understand the scope of the political risk they are creating when they start managing keys, even Session Keys, on behalf of others.

They must be up-front with partners about these very real risks.

Practical Abilities - Private Keys vs Session Keys

The only significant difference between the Session Key and the Private Key in a TLS setup is that the Private Key comes first in the sequence. The end goal of the handshake is to derive a shared Session Key, and it's this Session Key that provides the abilities and confidentialities expected of TLS. The TLS handshake/Private Key process can be thought of abstractly as a Session Key minting machine, but it's the Session Keys that actually work the locks. Once you have the Session Key for a connection, the Private Key is no longer relevant to that connection.
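A toy sketch makes the point concrete. Here a throwaway XOR keystream stands in for real TLS record encryption (this is not TLS, and the names are illustrative): anyone holding the Session Key can decrypt, and could rewrite, the traffic without ever touching the Private Key.

```python
import hashlib

def keystream(session_key: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256; it stands in for TLS record
    # encryption purely to illustrate the capability, not the protocol.
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(session_key + block.to_bytes(8, "big")).digest()
        block += 1
    return out[:length]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

session_key = b"session-key-shared-with-the-cdn"
record = b"balance: $1,000"
ciphertext = xor_bytes(record, keystream(session_key, len(record)))

# A middlebox holding the Session Key reads (and could rewrite) the
# record; the certificate's Private Key never enters the picture.
assert xor_bytes(ciphertext, keystream(session_key, len(ciphertext))) == record
```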

With access to a service's Session Keys you can:

  • Intercept confidential user information. (usernames, passwords)
  • View and change the content of user web pages. (MITM attack, see bank balances, etc.)
  • Identify and isolate specific website users. (user profiling)
  • Be subject to government requests for data interception and manipulation.
  • Suffer a security breach the same as a bank could.
  • Access user information like cookies and browser storage.
  • Script against the user's browser as the domain.
  • Cryptographically impersonate the service and authenticate with an end user.
  • (Future? Bind the service to a specific TLS key.)
  • Operate an app firewall/CDN service and inject CAPTCHAs and the like into secure sessions.

Only one very small part of the TLS operation is performed by the certificate's Private Key, and it's none of the things we really care about -- like the ability to maintain communications confidentiality and respond to law enforcement as a first party. Really, the only significant new ability the Keyless solution offers over a shared-Private-Key solution is the ability to turn off the creation of new Session Keys; in the case of Cloudflare's service becoming compromised, it's a better kill switch. If that's the only contingency you're planning for, Keyless is right for you.

If that's not your contingency, and you have practical issues with third-party data access -- legal, policy, or malicious attackers -- well then, it's like Moxie described of Lavabit:

"Unfortunately, their primary security claim wasn't actually true."

To translate to Cloudflare Keyless SSL, I'll posit:

Unfortunately, their end-to-end security claim wasn't actually true.

Reinventing the wheel

The really bad news, though, is that what they "invented" using Raspberry Pis and fabled stories of skunkworks development was already largely found in commercial off-the-shelf products known as networked hardware security modules (Net-HSMs).

Thales and Amazon, among others, make networked HSMs. You put your Private Keys in them, they stay in the datacenter, and then you point a webserver (or a group of webservers, like Cloudflare's CDN) at them using an OpenSSL engine (among other methods). The HSM handles the Private Keys, signs the secrets and, in effect, provides a similar kind of service to what Cloudflare is doing with their signing daemon. The webserver just offloads the cryptographic signing operations to the HSM via OpenSSL.
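The division of labour can be sketched like so (a toy signing oracle with illustrative names; HMAC-SHA256 stands in for the RSA/ECDSA private-key operation a real HSM would perform):

```python
import hashlib
import hmac

class SigningOracle:
    """Toy stand-in for a networked HSM or key server: the private key
    never leaves this object, and callers submit only a digest to be
    signed. HMAC-SHA256 stands in for the asymmetric signing operation."""

    def __init__(self, private_key: bytes):
        self._private_key = private_key  # stays in the datacenter

    def sign(self, digest: bytes) -> bytes:
        return hmac.new(self._private_key, digest, hashlib.sha256).digest()

# The front-end web server holds no key material at all; it forwards
# the handshake digest and splices the returned signature back into
# the TLS handshake it is running.
oracle = SigningOracle(b"private-key-never-leaves-the-hsm")
handshake_digest = hashlib.sha256(b"ClientHello||ServerHello||params").digest()
signature = oracle.sign(handshake_digest)
```

This is the same shape as both an OpenSSL-engine-backed Net-HSM and Cloudflare's signing daemon: the sensitive operation is remote, but everything derived from it (the Session Keys) still lives with the front end.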

So really, there's nothing all that new to see here; networked HSMs have been around for a long time, and they do practically the same job, albeit at a high cost. However, due to the problems inherent in trusting keys (even Session Keys or oracles) to third parties, they have never really been popular for third-party access or use. They're primarily used within an organization as a defense-in-depth technique to limit the damage caused by network intruders.

New security risks

Contrary to claims that the service provides the equivalent of on-premise security for SSL keys, I find there are a number of entirely new risks presented by the model. Just a few of the newly expanded security vulnerabilities include:

- Trusted Cloudflare staff compromising the service. (Third party employee risk)
- Government agencies "hunting sysadmins" at Cloudflare. (LEAs, NSA, CSEC, GCHQ, etc)
- Hacker risk both at Cloudflare and with oracle access control. (unauthorized use)
- Technical downtime risk. (more points of failure)
- Oracle attack risk.
-- This is where the oracle is tricked into revealing something about its internal keys. The post covers a couple of these known attacks, padding oracles and timing attacks, but at least for the latter it doesn't solve them; it just pushes responsibility off to the OpenSSL library, which, while the best we have, regularly suffers from new attack vectors and zero-day vulnerabilities. I would be curious to see how the signing-oracle design stands up over time to the increasing sophistication of adaptive chosen plaintext/ciphertext attacks, among others.
- Confused deputy risk. (attacking the oracle to sign malicious data)
-- This is where something goes wrong at the service and you sign bad data anyway; for example, the signing of specially crafted data. There are a number of TLS security concerns regarding the signing of crafted data, and this presents an entirely new risk when a third party is involved and able to get you to blindly sign data.

And the list goes on. In the world of TLS security, extra eyes and extra legal organizations (possibly in different jurisdictions) with access to data create massive risk. Oracles that blindly sign or decrypt arbitrary cryptographic data are generally frowned upon as an architecture, as are deputized daemons that cannot verify the source of their inputs; as a result, I have to question the idea that the Cloudflare Keyless SSL solution provides the same level of security as on-premise keys.
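One partial mitigation is to make the oracle less blind: require every signing request to carry a MAC from an authorized front-end, so the daemon at least verifies the source of its inputs before signing. A minimal sketch, with all names hypothetical:

```python
import hashlib
import hmac

REQUEST_AUTH_KEY = b"shared-between-frontend-and-oracle"  # hypothetical

def authorized_sign(private_key: bytes, digest: bytes, tag: bytes) -> bytes:
    """Refuse to sign any request that does not carry a valid MAC from
    an authorized front-end. This does not stop a compromised front-end
    from submitting crafted data, but it rules out anonymous use of the
    daemon as a blind signing service."""
    expected = hmac.new(REQUEST_AUTH_KEY, digest, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise PermissionError("unauthenticated signing request")
    return hmac.new(private_key, digest, hashlib.sha256).digest()
```

Even with request authentication, the confused-deputy risk above remains whenever the authorized caller itself can be tricked into forwarding crafted data.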

So with the Keyless SSL architecture, in my opinion, thoroughly debunked, we're still left with the fairly 'classic' E2E problem: how can we leverage the cloud's benefits while maintaining the confidentiality of the communications?

Cracking that E2E security nut to work with intermediaries will be one of the key research projects of our generation. It's one of the reasons I was so excited when Cloudflare announced they'd managed to build an E2E-secure CDN, and why I was optimistic that if anyone could solve it, it was them.

Sadly, with the announcement of the Keyless SSL technical details, it seems that future is still over the horizon.

As always, I welcome a response from the vendor and will happily update this post to include their response if provided.

Victoria Amalgamation - Grasshoppers and Ants.

Amalgamation is back in the news today, and the polls look supportive, but is polling data that is ignorant of the financial consequences useful or actionable?

I’m a data guy, and when it was suggested that Victoria put a question to voters: “Are you in favour of reducing the number of municipalities in Greater Victoria through amalgamation?”  I thought about the issue and realized I had no information on which to base that decision.

Amalgamation is a super complex subject — Greater Victoria has 13 municipal governments plus the CRD, a lot of redundancy, and, as evidenced by the sewage issue, problems making decisions that don't boil down to not-in-my-backyard. Amalgamation could be hugely helpful here, which would tend to bend my thinking to the YES side of the question, but then there's this nagging question in my head. WHAT WILL IT COST?

I sought to answer that question. I pulled in all the favours from all the data agencies I could think of — apologies to the Data BC team and Citizens' Services, as I requested and FOI'd data from the province that, it turns out, they don't even track. I had a simple question to solve:

What is the financial position of each of the 13 municipalities in Greater Victoria?

Turns out, no one — not even the province — can answer that question, and I have the negative FOI response to prove it.

All municipalities are required to submit an annual report to the province, and that includes data about debt, reserves, income, etc. It even has some data on non-financial assets (those things like sewers and roads that municipalities are principally responsible for)… but, and here's the rub, the data is historical — how much they spent, and how much that spend depreciated. An old city like Victoria with its aging infrastructure looks a lot smaller than it is on paper because so much of the infrastructure was installed in the early 1900s. So where's the financial position really sitting? Is the value of a city its assets as classically understood? Are its liabilities really just financial instruments, spends and depreciation — or is the liability really the fact that the city has to maintain its infrastructure service level? We can't turn off sewers or water mains, or stop maintaining bridges and roads.

There's a surprising lack of sophistication in tracking that liability, and it has led to a phenomenon known as 'borrowing from the pipes', wherein a municipality defers critical maintenance to pay for politically popular amenities. It has certainly hit the Greater Victoria region, and hit it hard, over a number of councils, and is generations away from being fixed — this is a long-term problem that requires long-term solutions.

I set out to answer that question though: what is the financial position of the 13 regional municipalities? I've started to get answers, but only via very time-consuming FOI requests. No one has studied this, and the poll-accessible public has no idea what this amalgamation thing will cost them.

Amalgamation supporters suggest that studies come after the question — but for me the question is unanswerable. I would support amalgamation as a philosophy, but not if my Dad's household taxes (he's a Saanich resident) go up while services go down to cover off municipalities that have failed to maintain their assets.

Aesop's fable of the Grasshopper and the Ant comes into play here. While the ant dutifully toiled all summer to put away food for winter, the grasshoppers just sang and played. When winter came, the grasshoppers were banging at the ants' door for food, only to be given a hard lesson in planning for the future. This region's municipalities range from ants to grasshoppers.

Not one to take no for an answer, I have begun to FOI the region's municipalities for data on this ‘borrowing from the pipes’ question. City of Victoria was the only municipality to proactively publish the information on their website — and I now have 3 other FOI responses, from Saanich, Esquimalt and Oak Bay.

The data's up at Google Docs; forgive the formatting, as it's a transitory dataset and will be cleaned up when I'm done.

Here's the 30-second version: you take the book value from the annual report (what the municipality considers the financial value of the assets in the ground) and divide it by what it will cost to replace those assets when their useful life is up (the replacement cost figure). This gives you a ratio — let's call it the McArthur Infrastructure Ratio. This isn't a perfect measure, and lots of problems have been pointed out with it (inflation and appreciation of fixed assets being the biggest issues), but we can factor most of these out as they are comparable between cities. On a per-city basis the ratio isn't particularly informative, but when compared to its neighbours, it tells a story.
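
For concreteness, the arithmetic is trivial. Here's a minimal sketch (the function and variable names are mine, not an official formula):

```python
def infrastructure_ratio(book_value: float, replacement_cost: float):
    """Return (McArthur Infrastructure Ratio, future liability R - B)."""
    ratio = book_value / replacement_cost
    future_liability = replacement_cost - book_value
    return ratio, future_liability

# Saanich's figures from the table below:
ratio, liability = infrastructure_ratio(758_105_520, 1_946_400_000)
# ratio ~= 0.39 (39%), liability == 1_188_294_480
```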

So far the ratios in Victoria break down like this;

City - McArthur Infrastructure Ratio (Book Value : Replacement Cost) [Future Liability R-B]

Saanich - 39% ( $758,105,520 : $1,946,400,000 ) [$1,188,294,480]
Esquimalt - 34% ( $77,312,184 : $219,560,000 ) [$142,247,816]
Victoria - 18% ( $342,756,413 : $1,708,000,000 ) [$1,365,243,587]
Oak Bay - 10% ( $49,548,291 : $485,039,900 ) [$435,491,609]

I'm working on getting all 13 prior to November, but FOI is a slow process.

What this tells us is that Saanich is full of ants — prudently paying for their infrastructure as it ages and deferring amenities until they can be afforded. Oak Bay, not so much; lots of happy singing and heel chirping coming from that municipality. Victoria sits in the middle. Most importantly, there are billions in future infrastructure liability for our next generation.

So with that in mind, what does the amalgamation question look like? Well, it looks like Saanich residents are going to get a pretty raw deal — on a financial basis they bring over double what Victoria brings to the table, and four times as much as Oak Bay. They would certainly lose in an amalgamated structure. Oak Bay, on the other hand, would do very well financially — and it is this fact that, I believe, is so strongly driving this agenda in the wealthier circles of town.

All of this is to say, it's way too early to ask the question “Are you in favour of reducing the number of municipalities in Greater Victoria through amalgamation?” I would instead ask “Do you support committing funding to study the issue of municipal amalgamation?” That would be the democratic question. Asking for the opinion of an uninformed public is little more than distraction, and the result isn't useful. Sadly, making a case based on data doesn't seem to be on the agenda for the pro-amalgamation lobby — and we saw that again today with this poll.

Metadata, privacy, access and the public service.

On May 15, 2014 the OIPC (Office of the Information and Privacy Commissioner) released Order F14-13 [pdf], denying a Section 43 application (to disregard an FOI request). Being the data/privacy policy wonk that I am, I tend to read all the orders put out by the OIPC — there's usually something interesting. This one was really interesting.

Someone had filed a request for the metadata associated with government emails — that is, who emails whom, and when — but excluding the content of those emails. The Open Data community has long mused about filing such a request, as it could be the single most important dataset for understanding how our government works; however, it was always considered extremely audacious to file, as the public service was sure to have a strong reaction to an unprecedented level of analysis of their communications. On May 15, I had no idea it had been filed, or that there was even a case before the commissioner.
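
To see why this dataset is so analytically powerful even without message content, consider a sketch (the records and addresses here are entirely hypothetical):

```python
from collections import Counter

# Hypothetical metadata records: (sender, recipient, timestamp) --
# "who emails whom, and when", with no message bodies at all.
records = [
    ("alice@gov.example", "bob@gov.example", "2014-05-01T09:00"),
    ("alice@gov.example", "bob@gov.example", "2014-05-01T09:05"),
    ("bob@gov.example", "carol@gov.example", "2014-05-01T10:00"),
]

# Edge weights of the communication graph: how often each pair corresponds.
# Even this trivial aggregation starts to reveal working relationships.
edges = Counter((sender, rcpt) for sender, rcpt, _ in records)
```

Scaled to an entire public service, aggregations like this map the real organizational chart, which is exactly why the request is both so valuable and so controversial.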

So, upon seeing the OIPC ruling, I filed an FOI request with Citizens' Services (now denied) for the Section 43 application and the supporting documentation that resulted in the order. I was hoping to learn why the province felt it should ignore this request, and under what justification. I also contacted the privacy commissioner's office to see if there was any way to become an intervenor on the file and provide an amicus-type opinion for the commissioner's consideration.

Through the opendatabc mailing list, I posted the story, and Paul Ramsey came forward and shared that it was his request. For those who don't know, Paul is a brilliant data geek, having helped build the PostGIS extension to the PostgreSQL database software that powers much of the internet — if anyone has the ability to work with this information, it is he.

Moving ahead 30 days, I have my FOI answer — records prepared for an officer of the legislature (i.e., the OIPC) are outside the scope of FOIPPA, and my request for the Section 43 application and documentation was denied outright by Citizens' Services. The OIPC process wasn't fruitful either, as the Section 43 matter had already been ruled on and they weren't sure the file was going to come back to them — so no avenue for comment there. (I'm now told, via Paul, that the request has been denied again subsequent to the Section 43 ruling and has gone back to the commissioner for another round. I'm still hoping to be able to provide comments.)

This issue might be the single most controversial FOI request filed in BC history — and it will set a lasting and groundbreaking precedent. At question is whether the public service is accountable to the public in its metadata records. The public interest in the metadata cannot be overstated, nor can the complexity of the access rights in question.

As a comparison, however, consider that CSEC, Canada's signals intelligence agency, spends obscene amounts of money analyzing the metadata of foreign governments — under the guise of increasing Canadian economic advantage. Will the FOI legislation, which allows citizens to oversee our own government, be given the same funding and economic priority as, say, CSEC spying on Brazil's government?

A core question is whether it is 'just metadata'. Privacy commissioners have disagreed, citing privacy implications; spy agencies have argued it's no big deal, claiming it carries different privacy expectations than, say, a telephone wiretap. But — and here's the crucial part — when it comes to transparency of the public service, where privacy expectations are explicitly waived in email policy documents and a crucial right of public access exists, what will the balance be for public service metadata?

In my opinion, this could be the single most valuable dataset ever released under FOI and this request will likely define public sector metadata policy for generations to come.

It is crucial that we get it right.

The state of Open Data 2014

I was reading an old blog post I wrote in 2011 about the state of Open Data in BC and thought I'd pen another annual update. I should do this every year, but sadly, I've not had the time to really blog lately.

In 2011, I highlighted rights and privacy clearing, cost recovery, scaling and license diversity as major failures and opportunities for course correction in the emerging open data field — and I’m sorry to say, many of these problems materialized.

But we've also had a lot of successes since 2011 — the Open Data Society of BC (disclosure: I'm a member of the board of directors) has held two highly successful Open Data Summits that have convened hundreds of people from across Canada and even the world to talk about Open Data. My favourite memories of these events were the edgy talks, like Pete Forde's 'People are dying to get the data', because they really bridge the gap between the civil service and the data activism that is occurring all over the web today. These events bring together people who would otherwise never meet, and invite them to learn from each other.

The Data BC group of the provincial government has been doing a great job with what limited resources they have — in the last couple of years they've facilitated the publishing of unprecedented transparency/accountability information in the form of fiscal data, the personal information directory and geographic data that has been hugely helpful to a number of stakeholders. They've done considerable work on licensing and on trying to source data — even where it doesn't exist. I've come to like and respect the work they've done for BC in a challenging environment.

But there's a problem in the foundation of this group as well — they don't have a budget to replace funding for datasets that are currently being charged for (the cost recovery problem), they don't have the statutory ability to command data release from other ministries, and they don't have the resources needed to implement the commitments made in the G8 Open Data Charter — especially the transformative commitment to an 'Open By Default' policy. This fix will have to come from cabinet, take the form of significant budget increases, and involve the creation of a legislative framework. Moreover, the architecture of data release will have to change — a central team fetching data for a portal won't scale. Data release has to be federated within each ministry, and just as each ministry has an officer responsible for handling FOI requests, so too should it have one to handle data requests. It's 2014; it's time to make data exchange as seamless and as common as email in the public sector.

The lawyers are also hurting the economics of open data — while much progress has been achieved on licensing, there are still very real debates about conformance to the Open Definition and serious problems with the legal disclaimers of liability for intellectual property and privacy rights clearing. It is my belief that these issues are hurting commercial investment in Open Data.

Across the country, other groups are also making positive progress — the Federal Government included a large funding commitment for Open Data in their 2014 budget, they're hosting hackathons (which they misspell as 'appathon' [because hackers = bad, of course]) and their MP Tony Clement is taking every opportunity to talk about the benefits of open data and the future promise of a more transparent public service. There have been major wins with digital filing of access-to-information requests, and citizen consultation exists in this area. The publication of valuable datasets like the Canada Land Inventory and Border Wait Times is also impressive.

There, too, there are big failures. Canada Post is suing innovators over their use of postal codes ( CBC Story ), and DFO's hydrographic data remains closed, mostly collecting dust. The government seems to be disclaiming responsibility for Canada Post's behaviour, but most will point out that it has jurisdiction over the Canada Post Corporation Act and could make a simple, common-sense legislative change to resolve this embarrassment to our federal open data commitments.

We’re making progress municipally — the City of Vancouver has made amazing strides in digital archiving, making digitized archives available on the local intranet in a unique and groundbreaking way that deals with intellectual property concerns. The City of Victoria has embraced open data, they launched VicMap (making their cadastral data open), began webcasting council meetings and published an open data portal. They even hosted a hackathon with Mayor Dean Fortin and Councillor Marianne Alto helping the Mozilla Webmaker folks teach children about digital literacy and creating the web [ link ]. The City of Nanaimo continues to lead the pack with realtime feeds, bid opportunities, maps of city owned fibre resources, digitally accessible FOI processes and so much more.

In the private sector and NGO space there are so many notable projects — the GSK-backed Open Source Malaria project being my favourite. There are also successes like Luke Closs' and David Eaves' work in the civic app space.

The hacker space is also seeing some success, with proof-of-concept prototype applications developed by citizens at hackathons going on to inspire civil servants to create their own versions and publish them widely. The BC Health Service Locator App and the Victoria Police Department App both get credit for listening to citizen input. Other apps have been created and have seen little to no uptake, like those developed to help citizens better understand freshwater fishing regulations (mycatch), or storefront display apps to help the homeless find shelters with available space (VanShelter). The next steps here are clearly to create bidirectional projects that allow both civil servants and citizens to work collaboratively on applications together using the open source methodology. (Who wants to be the first to get the Queen in right of British Columbia using GitHub?)

Other projects have failed to find traction due to a lack of data, or bad-quality data. My site is failing due to unreliable WMS-only access to data from NASA, which is down more often than it is up. The lesson here: online services are no replacement for downloadable primary source data. (My house of commons video service) is in its 7th year of operation, and continues to prove that even simple prototype apps can be useful and long-lived, drive policy change (House of Commons Guidelines) and find feature uptake (Hansard now has video clips embedded). Hopefully same-day recording download, clipping and linking will be added to ParlVU, and this app will no longer be needed.

For the coming year, the Open Data Society of BC is crowdsourcing its agenda, and I'd encourage you to participate in that discussion and to join or support the society — via OpenDataBC-Society.

I know I missed some people and agencies who are doing great things, so please leave comments if I missed you. (Tweet me @kevinsmcarthur for an account, as I don't monitor this site's admin pages often.)

(Updated) Evaluating the security of the YubiKey

The folks over at Yubico have responded to this article, and I'm happy to post their letter. It gives a little additional context to the issues I presented and a critical other-side response. I'm happy to see the company actively engaging and addressing the issues so quickly. There are a couple of bits that need clarification: for example, the nonces I point out are actually used in places other than inside the synclib, and the 'facilities' issues re the Las Vegas 'shipping center' were purposefully left vague of the full detail to avoid exposing what appears to be a residential address.

-- Yubico Letter --

At Yubico, we enjoy reading articles from security experts such as yourself, and we appreciate the visibility you provided Yubico through your detailed evaluation of our Yubikeys. Our security team at Yubico takes your assessment very seriously, but there are some clarifications and intricacies that we wanted to share with you that we’re confident will convince you that the Yubikeys offer the highest grade of enterprise security in a comparative product class. Please feel free to contact us if you have any further questions/comments…

The Protocol.

- The Yubikey personalization app saves a .csv logfile with the programmed key values meaning a malware-based attack may discover the log files on block devices even when the files have been deleted

In the most popular scenario, customers choose to use YubiKeys configured by Yubico, where the cryptographic secrets are generated using the YubiHSM's TRNG and programmed into YubiKeys in Sweden, the UK, or the US at our programming centers, which use air-gapped computing [at least one air gap between the programming station with its control computer and any network]. The plain text secrets database generated on the YubiHSM is encrypted with the customer's validated public PGP key and signed by the programming station's control computer's secret key; the plain text file is zapped in place and the secrets securely deleted from disk and memory on the programming station. At Yubico, we call this the “Trust No One Model”!

The Yubico personalization app provides customers the flexibility to program their own keys at their convenience. Yubico does acknowledge that customers programming their own keys may not be aware of this risk that the AES keys are in the .csv file and we are working to change the default behavior and provide additional warnings to inform users of the potential risks.  

- Replay prevention and api message authentication is implemented on the server receiving the otp — this has resulted in a number of authentication attacks like which are now corrected in later versions of the protocol. The design, however, trusts the server with the authentication security and thus presents a problematic architecture for shops that do not have crypto-aware developers and system admins able to verify the security setup is working as intended

As you have astutely observed, we've fixed the issue you've seen in later versions of the protocol. Customers who don't have adequate crypto-aware developers and system admins to secure authentication servers should work with solutions from our trusted partners.

- The replay prevention is based heavily on the server tracking the session and use counters and comparing them to a last-seen nonce. It also depends on application admins comparing public identities. This should ensure that the keys cannot be intercepted and replayed. Some servers do not properly validate the nonce or the hmac, or properly protect their databases from synchronization attacks. Some implementations do not even match the token identities and accept a status=ok as a valid password for any account (try a coworker's yubikey on your account!). The weak link in the Yubikey-with-custom-key scheme seems to be the server-application layer.
- The Yubikey protocol, when validating against Yubico's authentication servers, suffers from credential reuse. It is vulnerable to a hostile server that collects a valid otp and uses it to log in to another service, and it's vulnerable to hacking or maliciousness of the authentication servers themselves. You are delegating the access yes/no decision to Yubico under the cloud scheme.

Customers who are concerned about using the YubiCloud infrastructure for YubiKey OTP validation should consider implementing their own authentication and validation servers. Yubico provides all the necessary server components as free and open source code. Customers may also choose to configure and use the YubiKey with their own OATH-based authentication servers.

The code.

- The yubikey-val server contained a CURLOPT_SSL_VERIFYPEER = false security vulnerability in the synchronization mechanism.

We have fixed this issue about certificate validation when using https for sync between validation servers. We do, however, want to point out that the vulnerability had only limited repercussions. This case is closed on GitHub.

- The nonce and other yk_ values however were not and could be modified by a MITM attack. The attack presents a DoS attack against the tokens (by incrementing the counters beyond the tokens) and possibly a replay issue against the nonces. — however replay concerns require further study and I have not confirmed any exploitable vulnerability.

The nonce used in the ykval protocol from the php client is predictable, but it is unclear if this is an issue. We will be doing further review to address any possible exploit vectors if they exist. The case is still open.

- There were also instances of predictable nonces. The php code ‘nonce’=>md5(uniqid(rand())) is used in several key places. This method will not produce a suitable nonce for cryptographic non-prediction use.

The server_nonce field is only used inside the synclib code to keep track of entries in the queue table, so we deem this acceptable. We provided further explanation on GitHub.

- The php-yubico api contains a dangerous configuration option, httpsverify, which if enabled will disable SSL validation within the verification method. Again, the defence-in-depth approach protects the transaction, with the messages being hmac'd under a shared key, mitigating this as a practicable attack.

We are working to resolve the issue that was highlighted, in order to provide defense-in-depth protection and eliminate the possibility of turning off https certificate validation in the php client. This case is still open.

- The C code within the personalization library contains a fallback to time-based random salting when better random sources are not available [ ] however, I cannot figure a time when a *nix-based system would not have one of the random sources it would try before falling back to time salts.

We made the relevant changes to the code and addressed this issue about salting when deriving a key in the CLI personalization tool on windows. Thank you for pointing this out.

The Neo

- As a result, for both personalization issues and third party software, these modes aren’t useful in a real-world deployment scenario, but may be useful to developer users. That said, other smartcard solutions support RSA signing in a smarter way than layering the OpenPGP application on a JavaCard, so are likely a developer’s choice over the YubiKey Neo in CCID mode.

We don't quite agree with your analysis of our NEO product and want to point out that there is a distinction between YubiKeys used for development work versus production roll-outs. We intentionally allow users to re-configure the keys, and while this allows for possible attack vectors, it does not mean the product or protocol is insecure. In a production roll-out, most of our customers choose to protect their YubiKeys with a configuration password.

Therefore, for the NEO, we allow development users to turn the CCID functionality on and off as they choose. In a production roll-out, this function is pre-configured and protected. In addition, it is not obvious to us what makes the NEO in CCID mode less usable than any other CCID + JavaCard smartcard product implementation. We have also introduced a PIV applet that allows for all flavors of PKI signing, including RSA.

Logistical Security

We respect the due diligence done by you to find out about our locations through public information available on the web. However, although Yubico resides in a shared facility which houses other popular internet companies, we have a dedicated office which is accessible by authorized Yubico employees only. Our current focus is on delivering products with the highest level of quality and security, and the big corporate office will come soon ☺.

Just-In-Time Programming & Efficient Packaging offers Yubico a competitive advantage:
Because of the size of the YubiKey and our unique packaging technology, 1 operator and the Yubico Just-In-Time Programming Station can program over 1,000,000 YubiKeys per month.  YubiKeys are handled in slabs of 10 trays of 50 YubiKeys [500 YubiKeys in total] with 4 slabs per 10kg box [2000 YubiKeys].  A pallet of 100,000 YubiKeys weighs less than 500kg. Therefore, Yubico logistics and programming can be performed in facilities that are not available to other authentication hardware companies. Our logistics and programming team have all been with Yubico for more than 5 years and are among our most loyal and trusted employees. We pay particular attention to the security of our programming centers, and update our processes consistent with the advice of our market-renowned security experts.

Thank you!


-- Original Article Below --

Every so often I get to take a look at a new security device in the hopes of replacing our existing PKI-based systems, which, while very secure, are an administrative nightmare and don't lend themselves to roaming profiles very well. This time it's the YubiKey Nano and YubiKey Neo devices from Yubico that I'm evaluating.

All Yubikey devices support a custom OTP generator based on a home-rolled OTP implementation. There's a notable lack of formal proof for this scheme, but the security boils down to an AES-128 shared-secret design, which is reasonably secure against in-transit interception and key leakage. This appears to be the primary threat model the Yubikey is trying to protect against, and it may be the device's most secure property.
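
As a small illustration of the scheme's plumbing: the OTP the key types is 'modhex' encoded (a hex alphabet chosen to survive international keyboard layouts) wrapping an AES-128-encrypted token. Here's a minimal decoder for just the encoding layer; the alphabet is Yubico's published one, and everything else about the token format is omitted:

```python
MODHEX = "cbdefghijklnrtuv"  # Yubico's modhex digits, in order, for hex 0-f
HEX = "0123456789abcdef"

def modhex_to_hex(s: str) -> str:
    """Map each modhex character to its ordinary hex equivalent."""
    table = {m: h for m, h in zip(MODHEX, HEX)}
    return "".join(table[ch] for ch in s.lower())

# The familiar 'cc' public-ID prefix is simply modhex for hex '00'.
```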

The Protocol.

The protocol appears to have seen limited review against privileged attacks, and presents a number of security concerns, including:

- The ability to reprogram the devices to a chosen key before account association, due to default configurations being shipped both programmed and unlocked. Users are warned against reprogramming their devices, as they will lose the API validation ability; however, an upload mechanism exists to restore it. Check to see if your key starts with vv rather than cc when using the Yubico auth servers, as this may indicate reprogramming.

- The Yubikey personalization app saves a .csv logfile with the programmed key values, meaning a malware-based attack may discover the log files on block devices even when the files have been deleted. Needless to say, with the AES keys from the CSV, the security of the scheme fails.

- Replay prevention and api message authentication is implemented on the server receiving the otp — this has resulted in a number of authentication attacks like which are now corrected in later versions of the protocol. The design, however, trusts the server with the authentication security and thus presents a problematic architecture for shops that do not have crypto-aware developers and system admins able to verify the security setup is working as intended.

- The replay prevention is based heavily on the server tracking the session and use counters and comparing them to a last-seen nonce. It also depends on application admins comparing public identities. This should ensure that the keys cannot be intercepted and replayed. Some servers do not properly validate the nonce or the hmac, or properly protect their databases from synchronization attacks. Some implementations do not even match the token identities and accept a status=ok as a valid password for any account (try a coworker's yubikey on your account!). The weak link in the Yubikey-with-custom-key scheme seems to be the server-application layer.

- The Yubikey protocol, when validating against Yubico's authentication servers, suffers from credential reuse. It is vulnerable to a hostile server that collects a valid otp and uses it to log in to another service, and it's vulnerable to hacking or maliciousness of the authentication servers themselves. You are delegating the access yes/no decision to Yubico under the cloud scheme.
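
A minimal sketch of the server-side replay check described above: accept an OTP only if its (session, use) counter pair is strictly greater than the last pair seen for that key, and only if the OTP's public identity matches the account's enrolled key. All names here are illustrative, not the actual yubikey-val schema:

```python
# Hypothetical per-key state: last accepted (session counter, use counter).
last_seen = {}

def accept_otp(public_id: str, enrolled_id: str,
               session: int, use: int) -> bool:
    if public_id != enrolled_id:
        return False  # token identity must match the account's enrolled key
    prev = last_seen.get(public_id)
    if prev is not None and (session, use) <= prev:
        return False  # replayed or stale OTP
    last_seen[public_id] = (session, use)
    return True
```

The bullet points above describe servers that skip one or more of these checks; dropping the identity comparison, for example, is exactly the status=ok-as-password failure.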

The code.

Yubico has taken a radical transparency approach and published all their source code on…. Despite what follows below, this approach should breed confidence in the product over time. When compared to closed-source products, the Yubico product would appear to have a leg up when it comes to identification and correction of security flaws. They are also taking a defence-in-depth approach to API security by signing and checking values even over TLS links. While this mitigates a number of coding concerns, it may introduce new ones; I remain concerned about the use of an hmac signature, over user-controllable data, encrypted under TLS, as it is a case of a mac-then-encrypt scheme.
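
The defence-in-depth signing looks roughly like this sketch: an HMAC over the sorted key=value pairs of the API message, verified with a constant-time compare. The canonicalization here is an assumption for illustration, not the exact documented wire format:

```python
import base64
import hashlib
import hmac

def sign_params(params: dict, api_key: bytes) -> str:
    """HMAC-SHA1 over sorted key=value pairs, base64-encoded."""
    msg = "&".join(f"{k}={params[k]}" for k in sorted(params))
    mac = hmac.new(api_key, msg.encode(), hashlib.sha1).digest()
    return base64.b64encode(mac).decode()

def verify_params(params: dict, signature: str, api_key: bytes) -> bool:
    # compare_digest avoids a byte-by-byte timing oracle on the signature.
    return hmac.compare_digest(sign_params(params, api_key), signature)
```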

I did some basic analysis of the PHP and C code and found a number of concerning items. The yubikey-val server contained a CURLOPT_SSL_VERIFYPEER = false security vulnerability in the synchronization mechanism. Thankfully the developers had taken a defence-in-depth approach to the API, and the session and use counters were restricted from being decremented. The nonce and other yk_ values, however, were not, and could be modified by a MITM attack. This presents a DoS attack against the tokens (by incrementing the counters beyond the tokens') and possibly a replay issue against the nonces — however, replay concerns require further study and I have not confirmed any exploitable vulnerability.

There were also instances of predictable nonces. The PHP expression 'nonce' => md5(uniqid(rand())) is used in several key places. This method will not produce a nonce suitable for cryptographic, non-predictable use.
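The fix is to draw the nonce from the operating system's CSPRNG. A minimal sketch in Python (the original code is PHP, but the point is the entropy source, not the language):

```python
import secrets

# md5(uniqid(rand())) is derived from the clock and a non-cryptographic
# PRNG, so an attacker can narrow down its possible outputs.  A nonce
# that must be unpredictable should come from the OS CSPRNG instead:
def make_nonce() -> str:
    return secrets.token_hex(16)  # 128 bits of entropy, 32 hex chars
```

The PHP equivalent would be a function backed by the OS random device rather than rand(); any source seeded from the clock or process state is guessable by design.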

The php-yubico API contains a dangerous configuration option, httpsverify, which can be used to disable SSL validation within the verification method. Again, the defence-in-depth approach protects the transaction: the messages are HMAC'd under a shared key, mitigating this as a practicable attack.

The C code within the personalization library contains a fallback to time-based random salting when better random sources are not available [ ]; however, I cannot think of a case in which a *nix-based system would lack every one of the random sources it tries before falling back to time salts.

Logistical Security

I also took the opportunity to look at the security of the Yubico logistics process, and came up with a number of questions, not least that my Yubikey was apparently shipped from a residential address in Las Vegas, Nevada. This gives me pause with regard to Yubico's claim that the keys are "Manufactured in USA and Sweden with best practice security processes". I have questions about the chain of custody of the keys.

An investigation of public sources into the addresses provided on the Yubico site suggests they are shared with other firms and appear to be co-working spaces rather than the typical corporate offices one would expect of a company providing security products to the likes of Facebook and Google.

The Neo

The Yubikey Neo includes a JavaCard-based CCID/smartcard device and the ability to support the OpenPGP app. In testing, this is clearly beta software: it requires kludgey ykpersonalize commands with arguments like -m82. It is obviously not intended for widespread use, and as such it was discounted in favour of more mature CCID products.

Some of the YubiKeys also provide challenge/response and HOTP/TOTP clients. These are implemented via middleware that isn't first-party, is commercial software on OS X, and requires users to learn hotkeys.

As a result, between the personalization issues and the third-party software, these modes aren't useful in a real-world deployment scenario, though they may be useful to developer users. That said, other smartcard solutions support RSA signing more elegantly than layering the OpenPGP application on a JavaCard, and are likely a developer's choice over the YubiKey Neo in CCID mode.


In the end, I can't recommend the Yubikey to replace our PKI systems. The Yubico authentication server API represents a serious information leak about when users are authenticating with the service, and puts Yubico in a trusted-authenticator position. It is also the worst form of credential reuse: a hostile or malware-infected server can harvest unused OTPs and use them against other services.

What this means is that, when using the default pre-programmed identity, if someone were to break into any Yubikey-cloud-based system we use, they could break into all of our systems by collecting and replaying OTPs that have not yet been authenticated with the cloud. While you're using application X, it is using that credential to log into application Y and pilfering data. Despite being told not to, most users reuse passwords across services, so the login credentials from one service will usually work on another, and the attacker has everything needed for a successful login. This is in stark contrast to PKI-based systems, which always implement a challenge/response using public-key cryptography. The same applies to any corporation using pre-programmed keys in a single-sign-on scheme, as any hacked application server can attack the other servers within the SSO realm.

Yubico's, or our own, loss of the API keys (not the tokens' AES keys) used for server-to-server HMAC validation would also silently break the security of the system.

Because of the problems with the authenticator service architecture, we would have to program the keys into a single security zone, significantly weakening our currently partitioned services (today, multiple PKI keys can be stored and automatically selected via certificate stores and/or PKCS#11). In the best case, this would leave us programming the Yubikeys ourselves and shipping them out to end users, adding significant cost to an already expensive device and weakening our security architecture in the process.

In short, the device's default configuration is not sufficiently secure for e-commerce administration, and pre-programming the devices ourselves is not financially viable due to shipping and administration costs. The SSO architecture creates a single security zone across many services, which is neither desirable nor best practice.

I will continue to seek out and evaluate solutions that offer PKI-level 2nd factor authentication security without the headaches of administering a production PKI.

Setting the record straight on Halifax E-voting.

There is currently a story making the media circuit on electronic voting in the Halifax municipal elections. This is the story of that election and how this information became known, and what remains hidden behind responsible disclosure today.

In September 2012 I learned that Halifax was going to be using e-voting and had been making claims about the security and viability of online voting, so I reached out to colleagues in the security community to see if anyone had done a security evaluation of this e-voting solution.

The response I got back was that no one had done the research because there were concerns about the climate for this type of work; merely watching a voter cast their vote could be considered an election offence in some jurisdictions. So I decided to do some basic 'right to knock' research before the election opened, rather than investigate during the voting period. I simply read the publicly facing voting instructions on the municipal website and visited the voting site to see what security it presented to would-be voters. For example, was it presenting an identity-validated EV SSL certificate? I did some other basic security checks that required nothing more than loading the webpage and looking up details in public registries. To my surprise, the voting portal had already been set up by the middle of September (presumably for testing), and there were a number of items I found concerning in the implementation I was seeing.

So I wrote it up and sent it over to CCIRC (the Canadian Cyber Incident Response Centre), the agency responsible for managing cyber threats against critical infrastructure in Canada, with whom I've worked before on similar disclosures (like IN11-003). The process is known as "responsible disclosure" and gives the government and the vendors the opportunity to address the problem and make the information public once they have done so. It's generally considered impolite to talk about security vulnerabilities before they have been addressed, because they can be used by malicious persons before the systems are corrected.

I never heard back from CCIRC, except for a single 'ack'[nowledged] email confirming receipt. I assumed they were still working on the problem, and perhaps they still are today. Fast-forward a few months: I'm discussing online security with a local group and bring up the Halifax election as an example of a system I have concerns with. I don't tell anyone what the specific security issues are, and so afterwards a local journalist, Rob Wipond, comes up and asks me for more detail; essentially, for proof. I tell him "I can't tell you that; ask the government," and point him towards CCIRC... little did I know he would, and did.

May arrives, and CCIRC has apparently fulfilled an ATIP request made by Rob Wipond, who sends the result to me for comment. It's mostly redacted, but it does show that they took the issues seriously and contacted the municipality and the vendor to get them addressed. It says they mitigated some concerns, but not which ones or what the fixes were. The redactions were unsurprising, as the information had not otherwise been made public at that time and many of the concerns would have been hard to resolve. We're not talking about a quick software fix, but rather altering voting instructions and redesigning how the system is implemented.

Rob apparently put together a video and asked the vendor and the municipality for comment. I didn't think much of it; Rob hadn't discovered the details of the security vulnerabilities and was reporting on redacted documents and questionable audits. I'd never shared the vulnerability data with him, so he had very little to work with.

Nevertheless, the story gets picked up by CBC radio and I hear Rob talking about the issue. You can listen to that here. But he still doesn't have the details, so I decide to let the story continue without my input; what can I add if I can't talk about the vulnerabilities?

Then everything changes. The next day, CBC has the HRM clerk and the vendor on air to respond to the concerns. I was shocked. During the interview, the HRM clerk discloses that we're talking about a "strip attack". When asked "Was the election spoofed?" she reassures the public in no uncertain terms: "Absolutely not." I was floored. Not only can they not know this, but they disclosed the type of security vulnerability in play. The vendor then goes on about things that have nothing to do with these types of attacks, like immutable logs and receipts. They call the whole thing hypothetical, never pointing out that it's illegal to hack into a live voting system, so no one could give them proof even if they wanted to.

So now that the cat is out of the bag on the "strip attack" portion, I can talk about that part of the disclosure; those in the know call this ex-post discussion. Two of the three areas of concern remain secret, however, so I won't be talking about those items.

I've re-scanned the CCIRC disclosure document to remove the redactions around the now-publicly known stripping attack. You can download that document here [PDF].

My final conclusion in the disclosure was:

“The election process in use may present a number of security and privacy challenges that electors may not be sufficiently aware of when deciding to cast their votes online. These vulnerabilities and lack of auditability may affect the perceived validity of the election result for those that did not use the online mechanisms to vote. The online election may need to be suspended in order to address these and other issues not here disclosed.”

I also make clear that “This can be achieved at scale sufficient to draw into question the election result and is difficult, if not impossible to detect as there are limitless network perspectives that could be attacked.”

I was also concerned to hear that they think these types of attacks are hard, requiring considerable cost and effort. The reality, of course, is that like any computer vulnerability, there are those who discover and publish the techniques (hard) and those who simply use them (easy). We call the latter "script kiddies", and yes, you can think of them as they are brilliantly portrayed in this Rick Mercer skit.

In this case, an SSL stripping attack could have been achieved with a piece of off-the-shelf software called sslstrip. It's not hard to use and doesn't require any considerable effort to install. It can be set up at practically any point between the voter and the voting server, and could compromise the confidentiality of the voting information. The problem lies in the voting instructions: when users type the published address into their browser, it is translated into a plain http:// URL, not an https:// one. Since there's no SSL at the start, the attacker simply makes sure it stays that way. Everything else looks identical to the user, save for a missing s in the URL bar and a lock icon that never shows up. There was also a third-party domain in use at the time I did the research: the voter got redirected to a site previously unknown to them. The attacker could simply redirect the user to a look-alike domain with SSL set up, https in the URL bar and the lock icon lit, clone the website, and then drop the votes on the floor, collect credentials, and so on. You'd have to be really on your game to spot that the look-alike in your URL bar is not the legitimate domain, given you'd never heard of either before you visited the website. These plain stripping and hybrid stripping/phishing attacks are ridiculously common on the internet today; they are not difficult to achieve and are particularly difficult, if not impossible, to detect, as no altered traffic ever hits the official servers.
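The core trick is nothing more exotic than rewriting secure links in transit. A toy illustration of the concept (not the actual sslstrip tool, which also proxies the upstream HTTPS connection, tracks sessions, and handles cookies; the domain is made up):

```python
import re

def strip_links(html: str) -> str:
    """Downgrade every https:// link so the victim's session never upgrades."""
    return re.sub(r"https://", "http://", html)

page = '<a href="https://vote.example/ballot">Vote here</a>'
stripped = strip_links(page)  # every link is now plain http
```

The proxy makes the real https:// request upstream itself, so the official server sees an ordinary SSL connection while the voter sees none; that asymmetry is why the attack leaves no trace in the server's logs.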

To actively modify the information in transit (to flip votes, say), an attacker would use this tool along with a simple script to modify parts of the communication between the voter and the voting servers. Contrary to assertions that you'd have to recreate an entirely new voting app, you only have to change a few lines in the in-transit data. At most it's a few hundred lines of script; it's the kind of thing the smart kid in your high-school computer lab could do. Done aptly, the voting servers see the user's original IP address and a legitimate SSL connection: SSL is stripped only from the voter's perspective, not the server's. In general (and I've not researched this particular solution), receipts and voter codes won't save the process, as the attacker can see the codes, hide the receipt entirely, hand out receipts from other legitimate votes (receipt multiplication), or simply include more form fields on the webpage asking the voter for more information, like their name and address.

To the incredulous question of why anyone would go to this minimal effort: all I can say is, we are still talking about a public election, right? A recount is impossible, and a city council is the prize.

To the other concerns, well, ask CCIRC or HRM if they're willing to make those public too.

ALPR and Digital Civil Rights

Once again my fight for digital civil rights has landed on the front page of the Times Colonist, this time in relation to the ALPR (Automatic License Plate Recognition) surveillance system. I highly recommend reading the commissioner's report, which you can find at

The report goes into great detail about the ALPR program and is derived from a lot of information that our research group has not been able to obtain under the freedom of information access processes -- this despite repeated requests for all documents of all types relating to the ALPR program. (Rob Wipond reports that he currently has 6 complaints before the federal information commissioner)

The report reveals that non-hit data (including the movement patterns of innocent Canadians) is being acquired and shared outside the BC jurisdiction. It also makes crystal clear that where local police collect the information, they are in custody of it and subject to FIPPA regulations on its handling. This includes not storing and not sharing any data that, after scanning and comparison to the on-board hotlist, is no longer useful for policing purposes.

The Commissioner's report also reveals a new data point which we were unable to access: obsolete hits. These are hits that were valid in the database at some point, but are no longer valid when the vehicle is scanned. The report suggests these false hits cannot be shared with the RCMP either. This requirement alone is a huge win for the accountability of the program, as it mandates the review of each and every hit produced by the ALPR system before it is shared with the RCMP or used for secondary purposes. This should return ALPR to being a useful convenience tool for police plate scanning, but it removes the dragnet surveillance capability of the system, as it will likely necessitate manual review of the data produced.

That said, I was disappointed that the commissioner did not engage in an analysis of the confidence rating of the system as a whole. With accuracy rates for ALPR systems more generally claimed in the 70-95% range, they have the potential to generate tremendous amounts of false information that will be used against people. The commissioner's report does give us two hugely valuable data points in this regard: for every 100 scans, only 1 is a hit, and in a 95%-accurate scanning system, 5 scans in 100 will be inaccurate. The report also states that 4% of hits are obsolete, further reducing the confidence of the resulting data. A Bayes' theorem analysis of the overall system's data confidence is definitely needed, but will require significant resources and access to do properly. The initial data, however, suggests that the system may produce significant volumes of incorrect data, and that confidence ratings may be low enough to call the entire program into question, even in the hit-data context alone. Certainly ALPR's use as an evidence-generating tool for court purposes will be easily challenged, and investigations that start from ALPR data may be subject to the fruit-of-the-poisonous-tree doctrine in certain jurisdictions.
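As a rough illustration of the kind of analysis I mean, here is a back-of-envelope Bayes' theorem estimate using the two figures above: a 1-in-100 hit rate, an assumed 95% scan accuracy (the generous end of the claimed range), and the report's 4% obsolete-hit rate. The accuracy assumptions are mine, not the commissioner's.

```python
p_wanted = 0.01    # prior: 1 scan in 100 is a genuine hit
accuracy = 0.95    # assumed true-positive rate (generous)
false_pos = 0.05   # assumed false-positive rate on innocent plates

# P(wanted | flagged) via Bayes' theorem
p_flagged = accuracy * p_wanted + false_pos * (1 - p_wanted)
p_wanted_given_flag = accuracy * p_wanted / p_flagged   # ~0.16

# Discount the 4% of hits the report says are obsolete
p_actionable = p_wanted_given_flag * 0.96               # ~0.15
print(round(p_wanted_given_flag, 2), round(p_actionable, 2))
```

Under these assumptions, roughly five of every six flags would be wrong, which is the sense in which the dragnet's raw output may be mostly noise.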

Overall, I'm thrilled with the report. It validates what I and my colleagues have been saying about police use of surveillance tools and is an incredible study into what these programs actually look like in practice. The data in the other-pointer-vehicle category, as just one example, shows just how broadly these programs are being applied. It also draws into question many previous statements by the authorities on the scope of the ALPR program.

I look forward to Victoria Police and the province fulfilling the report's recommendations on disclosure and access to information. Sunlight is always the best disinfectant.

Responsible Disclosure and the Academy.

They say publish or perish, but what happens when publishing puts people's personal information at risk?

Last year I discovered that SSL, as implemented for server-to-server data transfers in popular software, has common and extremely serious problems. Merchant APIs from PayPal, Moneris, Google and others left people's credit card details at risk of interception. I discovered that OAuth libraries listed by Twitter were vulnerable, and found a bunch of other problems that I'm still in disclosure on. It was a big deal: I'd found what computer security folks call a 'class break'.
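The class break, in a sentence: non-browser TLS clients that fail to verify the peer certificate chain or the hostname. As a sketch of what correct client code must establish (Python's stdlib shown for illustration; the vulnerable merchant SDKs were mostly PHP/cURL, where the corresponding settings are CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST):

```python
import ssl

def secure_client_context() -> ssl.SSLContext:
    """A TLS client context that actually authenticates the server."""
    ctx = ssl.create_default_context()
    # create_default_context() enables both of the checks the vulnerable
    # SDKs turned off:
    assert ctx.verify_mode == ssl.CERT_REQUIRED  # validate the cert chain
    assert ctx.check_hostname                    # match cert to hostname
    return ctx

# The peerjacking pattern is the inverse: accept any certificate.
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
# With that, a MITM can present any self-signed cert and read the traffic.
```

Disabling either check alone is enough to break the guarantee: without hostname matching, any valid certificate for any domain is accepted, which is just as fatal as skipping chain validation.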

So I began a responsible disclosure process. Initially I had discovered a vulnerability in a single merchant provider's API, and as a good security researcher does, I contacted the vendor; this quickly went sideways. As a result, I enlisted the help of a local academic and trusted resource of mine, Christopher Parsons. Parsons recommended I get in contact with Tamir Israel, staff lawyer for CIPPIC, and thus began what was to be (and still is) a nightmarish saga of responsible disclosure and broken cyber-security programs.

I've not written this openly about peerjacking before today because this issue is still very much in play, but now that Dan Boneh and his colleagues have published their research, I feel I must respond and set the record straight.

Tamir offered to help me free of charge and really went to bat for me, offering legal advice and help with the predicament I was quickly getting myself into. Thanks to Chris and Tamir, we got in contact with the Office of the Privacy Commissioner of Canada to disclose the vulnerability and see if it fell within their area of responsibility. After a couple of phone meetings, it was determined that because there was no evidence of actual data being stolen via this vulnerability, and because I could not provide such evidence without engaging in illegal computer hacking, the matter was outside their jurisdiction. The privacy commissioner's process, it seems, is about cleaning up data spills, not preventing them. The file was referred to CCIRC, the Canadian Cyber Incident Response Centre.

I had already emailed CCIRC but had received no reply. This is a small agency, and while they tell me they had received my disclosure, it's not clear what they had done before the privacy commissioner's office became involved. The group seems to work silently behind the scenes; if there was action, I didn't know about it. But once the privacy commissioner referred the file, things changed. Tamir and I began what turned into a weekly discussion with CCIRC about the vulnerability, culminating in the Cyber Security Information Notice IN11-003. It was new ground for CCIRC (disclosing new zero-days is not something they had much, if any, experience with). I wanted the agency to name names, but they decided against it. The notice contains only a technical description of the vulnerability, with no context for who is at risk. Mom-and-pop shops using PayPal, Moneris or Google would likely never see the notice, nor, if they did, would they understand that their business and customers were at risk. Throughout the back-and-forth I was rather miffed about the process, but Tamir rightly pointed out that the disclosure was helpful in that it provided a third-party look at the vulnerability and would help confirm what I was saying.

I couldn't live with not disclosing the affected software, though, and throughout the process I had found a number of other affected vendors. I'd even found Google Code Search and GitHub search terms that pointed to a much larger issue, and found other, non-PHP software that was affected. I had disclosed the affected software I knew about, and the code searches, to CCIRC during the development of the information notice, but they hadn't named names. I'm sure they had their reasons, but I had the public to think about. How were mom-and-pop merchants going to know to patch? What about their customers' credit card details and order information?

I decided to take the list of companies I knew had been notified, and had had time to resolve the issue, and to disclose the vulnerability in their software. That resulted in ... but that isn't even close to a complete list, and CCIRC is, to my knowledge, still working on the issue of the GitHub and Google Code searches. For my part, after the disclosure I continued to work with PayPal and others to get them patched up. PayPal is still working on this issue in their APIs and towards merchant disclosure, which is what makes the Boneh et al. paper so troubling. I never spelled out how the other PayPal APIs were affected because the issue is still being resolved... responsibly.

So that brings me to today. Dan Boneh (a well-known academic cryptographer) and a number of colleagues have published "The most dangerous code in the world: validating SSL certificates in non-browser software". I had been given a heads-up in August that the paper was coming out, and I reached out to Dan...

[Normally I wouldn't share publicly an email I sent, but since I never got a reply, well, it's just my own writing....]


Sent on 08/15/2012

Hi Dan,

I was linked this morning to by
<redacted> this morning. Just wanted to let you know that this appears to
be exactly like my prior published work on peerjacking,
( and that informs Government of Canada, CCIRC
cybersecurity notice IN11-003.

I am still in responsible disclosure with a number of vendors on this,
including undisclosed vulnerabilities in other PayPal sdks, and non-php
related vulnerabilities, to which I believe you may be making reference.
I have been withholding a full whitepaper on this research while the
industry addresses and resolves these problems. This issue affects a
wide swath of the credit card processing industry and represents a
threat to critical infrastructure. I would appreciate your team not
disclosing the vulnerabilities in specific software before all
responsible disclosure avenues have been properly exhausted.

I am working with CCIRC and others to responsibly disclose this


Kevin McArthur

I never got a reply, and I even blogged about it before it was released, but today I've noticed that the paper has been made available to the public in PDF. I've read it, and I'm cited in it, but I've made it clear previously that peerjacking affects more than PHP applications, and that I've purposefully not disclosed the details of the research due to the ongoing responsible disclosure process. To see this research published within the academy, in the face of my plea to respect the responsible disclosure process, troubles me. Please know that I tried to make sure this vulnerability was resolved before it was widely disclosed.

Welcome to the world of peerjacking. SSL is dead.

