23 May 2018

Things you probably don't want to do on your [airline] website's payment pages

Web User Tracking and Privacy

The recent news coverage of how certain companies have used Facebook's graph APIs to hoard user data has [slightly] raised people's awareness of the lack of privacy online. This has led to some good and healthy discussions about online data collection vs user privacy. It is not a new thing that companies use a wide variety of tools to gather as much data as possible about their users, but many users have been blissfully unaware of the extent to which they are tracked when they do anything online, whether shopping, making airline/hotel/car reservations, or just reading a news article.

While it is now established that many companies use a wide variety of tracking tools on their websites and in their mobile apps to gather more information about their customers, how much data is gathered and how many people have access to that data may still be fuzzy at best to the average website user. Most of these trackers are third party tools from companies that are happy to gather and analyze huge amounts of data about users and user behavior. The output visible to these tracking companies' customers (i.e. website owners) will often not show the detail and granularity of the data gathered and stored. While the tracking tools may collect very detailed data about website and app users, the "visible part" is often limited to colorful charts and reports showing aggregate data for a large number of users. Behind the scenes, it can be a lot more invasive, with logging down to every mouse movement and keystroke made by a website user.

User tracking and its detrimental effect on user privacy is an interesting topic in itself, but on top of the privacy concerns, tracking tools can introduce other unintended security issues that may violate industry regulations such as PCI-DSS and regional/local legal requirements like the GDPR.

Airline IBEs and PCI-DSS

I recently had a quick look at a few well known airlines' internet booking engines. Like many others, airlines like to use online trackers to get a better understanding of how users use their websites, and of where changes and tweaks could increase sales and boost revenue. This is very often done using third party tools where JavaScript and/or other resources are loaded from a host controlled by the third party. Very often, other third-party-hosted resources, such as JS frameworks served from CDNs, are also used rather than locally hosted copies.

Setting the obvious privacy concerns aside, a more interesting aspect is when these third party hosted resources are used on payment and checkout pages: pages where e.g. credit/debit card data is processed. This of course introduces additional concerns regarding PCI-DSS compliance, since all those third party scripts will have access to payment data entered on those pages. Unless the right precautions* are taken, the same scripts can be modified by third parties at any time, without alerting users or administrators of the sites that use them to the fact that the code actually running in users' browsers is no longer the code that was originally intended to be used.

* = SRI (Subresource Integrity) can be used to prevent modified third party code from running in modern browsers. It is advisable to deploy it together with a content security policy (CSP), which defines which sources should be trusted for different types of content, along with a URL that modern browsers can automatically report policy violations to.
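As an illustration, an SRI value is just a cryptographic hash of the exact script bytes, base64-encoded into the integrity attribute. A minimal Python sketch (the CDN URL and script contents are hypothetical):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Compute a Subresource Integrity value (sha384, base64-encoded)."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hash the exact bytes of the script you intend to reference;
# if the file hosted on the CDN changes by even one byte, the
# browser will refuse to execute it.
script_bytes = b"console.log('checkout loaded');"
tag = ('<script src="https://cdn.example.com/lib.js" '
      f'integrity="{sri_hash(script_bytes)}" crossorigin="anonymous"></script>')
print(tag)
```

The crossorigin="anonymous" attribute is needed for cross-origin scripts so the browser can actually perform the integrity check.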

While end users may be unaware of e.g. PCI-DSS and other data protection requirements, I would expect medium sized and large airlines to have technical staff who are aware that a third party script loaded on a payment page has access to any data present on, or entered on, that page. The same companies probably also have one or a few people involved in PCI compliance, and although PCI compliance is often handled by people with limited technical skills, they should at a minimum be aware of the risks associated with dynamically loading code from third party hosts. However, when looking at some airline websites' checkout/payment pages, I almost get the impression that even some of the most basic PCI-DSS compliance requirements have been set aside for the benefit of user tracking and data gathering performed using third party tools.

A second interesting aspect is that these sites often use EV ('extended validation') certificates that make their company name show up in green in browser address bars. While this is intended to make it clear to users which company/entity they are interacting with, and to instill trust that they're interacting with the correct entity, the current implementation of EV certificate indicators in most web browsers doesn't make it clear to users when the same page includes resources loaded from, and controlled by, third parties.

Some of the PCI-DSS requirements that appear to me as possibly overlooked by at least a few of these airline sites are:

  • 2.2.5 - "Remove all unnecessary functionality, such as scripts...": In my opinion, many of these third party scripts absolutely fall within the scope of 'unnecessary functionality' for a payment page. If a script is not directly needed for successful processing of those payments, it doesn't belong on a payment page.
  • 3.2.2 - "Do not store the card verification code or value...": Those who have a generic key logger running on the credit card page would appear to be in violation of this requirement...
  • 4.1 - "Use strong cryptography and security protocols": Many of these third party hosts support older insecure versions of TLS/SSL and/or insecure ciphers. Even if they don't directly transmit cardholder data, this can potentially be exploited by unauthorized third parties to manipulate script content in transit.
  • 6.3.2 - "Review custom code prior to release to production or customers" and 6.4 - "Follow change control processes...": How can you review code and follow change control procedures for code that is dynamically loaded by each end user's system from third party controlled hosts? Without SRI (which very few airlines use), there is no way to know when third party scripts/content have been modified, or to block modified content from loading.
  • 6.5.1 - "Address common coding vulnerabilities in software-development processes..." / "...injection flaws...": This practice is by definition code injection from third party controlled systems.
  • 12.8 - "Maintain and implement policies and procedures to manage service providers": Does anyone at these companies even have an idea of how many third parties have direct or indirect access to their customers' payment data through inclusion of third party hosted scripts?

Examples

Without making this rant too long, I will just let a few screenshots talk for themselves.

The screenshots below are from the payment pages of a few well-known airlines, and on the right hand side of each one is the Chrome developer tools window (which you can launch by pressing F12 while using Chrome) showing which scripts and other resources are loaded by the same page, and from which site/host they are loaded. I really wonder if all those third party sites are involved in the same airlines' PCI-DSS certification programmes and audits, and whether each of them is PCI compliant.

Alaska Airlines



Emirates



Etihad



Finnair



Is that a keylogger on your credit card form, Finnair, or are you just happy to see me?

What does PCI-DSS requirement 3.2.2 say about storing CVVs?


Frontier Airlines



Jetblue



Korean Air



Lufthansa



SAS



Silkair



...the list can go on and on, but I think that's enough examples of payment pages littered with excessive third party content.

None of the airlines above have a defined content security policy (CSP) to control which third party sites are valid sources for loading scripts and other content, and none of them use subresource integrity (SRI) to protect users from scripts that have been tampered with or otherwise modified while hosted on third party sites.

The next time you are going to pay for something online, press F12 to launch your web browser's developer tools, and you can see for yourself what third party resources are included in the payment page.

I will include just one more example, this time from an airline that has implemented its payment page in a cleaner-than-average way, with only a minimal set of scripts loaded from its own servers, Visa's servers, and an Adobe server with a qatarairways.com hostname:

Qatar Airways



FAQ

What's the problem?

TL/DR: Some airline websites make excessive use of third party scripts/CSS/HTML hosted on sites/hosts not controlled by the website owner, which in turn exposes them to potential vulnerabilities at those third party sites. In other words: they expose a larger than necessary attack surface. When this is done on payment pages, it increases the chance that they may leak their customers' credit card details to unauthorized third parties.

I'm responsible for an airline website that does this - what is the worst that could happen?

Someone, either a rogue authorized user at a third party organization, or an unauthorized person who has found a weakness or backdoor in one of the third party hosts, can modify one of the third party hosted scripts (or CSS files) to make it capture credit card data and funnel it elsewhere. When the breach is discovered, the credit card companies will invite you to pay stiff penalties if you want to continue processing credit card payments, and depending on where in the world you are located/based, you may also be legally required to issue a breach notification. This will inevitably lead to negative publicity for your organization.

Has this ever caused a problem in the real world?

Yes, it has. Not too long ago, Delta had customer credit card data exposed by a third party script loaded on their site as part of a chat help tool:

What can I do as a site owner to limit the ways I expose user data to third parties?

  1. Use common sense and limit the number of third party resources you load from third party sites/hosts to the minimum set needed.
  2. Implement CSP to control which third party sites can be referenced from your site, and use SRI to block third party content from loading if it has been modified/tampered with.
  3. Involve your internal or trusted external technical resources in compliance reviews. Don't just rely on checklists and scan tools for e.g. PCI compliance.
  4. Make yourself aware of, and follow industry best practices to protect your (and your customers') data.
  5. Use common sense. See #1.
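To illustrate point 2, a restrictive CSP for a payment page can be as small as a handful of directives. A hedged sketch in Python (the PSP and reporting hostnames are made-up placeholders, not a recommendation for any specific provider):

```python
# Build a Content-Security-Policy header value for a payment page.
# Only the page's own origin plus one explicitly trusted payment
# provider may serve scripts; everything else is blocked, and
# violations are reported to a collection endpoint.
directives = {
    "default-src": "'self'",
    "script-src": "'self' https://secure.example-psp.com",
    "style-src": "'self'",
    "frame-ancestors": "'none'",
    "report-uri": "https://example.com/csp-report",
}
csp_header = "; ".join(f"{name} {value}" for name, value in directives.items())
print("Content-Security-Policy:", csp_header)
```

Any script injected from a host not on the script-src list simply will not run in a CSP-aware browser, and the report-uri endpoint tells you someone tried.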

23 March 2017

Dear Emirates, your mobile app has a security problem

TL/DR - the Emirates mobile app included an empty override of the OS's built-in X509TrustManager, making it accept any TLS/SSL certificate served by a man-in-the-middle, and thereby exposing app users' customer data, credit cards, etc. to third parties. That has now been fixed. See the updates added at the bottom of the blog post for details.


Dear Emirates,


I fly a decent number of miles with you every year. I do that because you provide good service, good food, and good entertainment, all at a good price, and I sometimes need to fly to places that you fly to. I just got back home from a 6h+16h+16h+6h roundtrip on EK flights on your A380s, and I had fun all the way. In short, I am a happy Emirates customer.

Around a month ago, I received an email from you, pitching your mobile app, so I decided to give it a try. I think my habit of flying with you puts me within the demographics targeted by your mobile app, and I was hoping that the app would be useful to me.

Now, most users will just install a mobile app and use it if it provides a friendly enough user experience and user interface. However, I like to dig a bit deeper before I trust any mobile app. The underlying reason is that I think the mobile app industry is a bit of a "wild west". Or maybe a better analogy would be the "Vietnam war". A lot of companies desperately want to have a mobile app, and as a result they will hire any company to build one, regardless of whether said company has the competence needed to build a good and secure app.

One of the first things I checked after installing the Emirates app was whether transport security was properly implemented. It turns out it was not. At least not in the Android app I installed on my not-too-ancient Samsung device. In short, transport security is a mechanism for protecting data by encrypting it as it travels over the internet. It is often referred to as HTTPS, SSL, or TLS, and is used to ensure that a web browser or app is getting data from and sending data to the correct servers, and to protect that data from eavesdropping.

I immediately raised this with you through several channels: email, Twitter direct messages, the app's feedback form, and public tweets. However, it seems my feedback didn't make it to the right people within Emirates. I know you have a lot of competent people working for you in a variety of functions, and I hope that applies to information security as well.

Do you have an in-house infosec department? If so, please have them review your mobile app. They will understand what this is all about.

A more detailed explanation of the problem your app suffers from follows. I will try to explain this in not-too-technical layman's terms to make it easier for non-technical people to understand.

Whenever you access a website serving or consuming data that needs to be protected, that is usually done over a secure protocol, a.k.a. transport security, sometimes referred to as "SSL" (now deprecated) or TLS. When a web address starts with "https", the web browser you use will connect to the web server using a (usually) secure encrypted protocol. One key part of establishing this connection is verifying the identity of the server you're connecting to. This is done using something called a "certificate", "SSL certificate", or "TLS certificate". A certificate is usually signed by a third party, a "certificate authority" or CA. Their cryptographic signature guarantees that a web server or app server belongs to the domain it claims to belong to.

Your emirates.com website has a certificate, and so does your mobileapp.emirates.com server, which provides your mobile app with access to your reservation system, your CRM system, Boxever, etc.

If you open the emirates.com website in a web browser, you will see that the address bar shows a padlock icon, and that the address has a "https" prefix. This means that the connection is secure, and that the certificate presented by the site is valid and signed by a CA trusted by the user's browser and device.



Now, if a bad actor wanted to intercept traffic between a user's web browser and your website, they could try to do so using a man-in-the-middle (MITM) proxy. In short, this is a program running on a system somewhere between the user and the web server they're trying to access; it terminates the secure connection and establishes a new secure connection to the server, which allows it to read all communication between the browser and the web server. However, this is only possible if the web browser, app, and device used trust the certificate served by the MITM proxy. If not, the web browser will show a warning to indicate that a secure connection could not be established.

MITM proxies are not uncommon on corporate networks and on public wifi networks such as the ones available to hotel guests, at airports, in coffee shops, etc. A correctly configured device with a modern browser, or a correctly implemented app using transport security, will normally warn the user if a MITM proxy is trying to intercept secure connections. For insecure connections, the user is generally unaware that their traffic is being intercepted.

If a user tries to access the emirates.com website from a mobile device through a MITM proxy using a self-signed certificate, or a certificate signed by a certificate authority that is not trusted by the device, the web browser will display a certificate validation warning that looks something like this:



The same thing applies to mobile apps: if a mobile app needs to communicate with a backend system somewhere on the internet, that should be done over a secure protocol. This is usually the same HTTPS (TLS) protocol used by web browsers, with the only difference being that apps do it "behind the scenes" without showing the user an address bar. The Emirates mobile app communicates with servers such as mobileapp.emirates.com, boxever.com (for tracking user behavior), and a few others.

Some mobile apps do this in a correctly implemented way; they validate the certificate presented by the backend server and ensure that it was issued by a certificate authority trusted by the device the app is running on. Some don't. Your mobile app is one of those that will trust any certificate, regardless of whether the issuer is trusted by the device, including self-signed certificates.
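For readers who want to see what the difference amounts to in code, Python's ssl module makes the two behaviors explicit. This is an illustrative analogue of the problem, not the app's actual (Java/Android) code:

```python
import ssl

# How a correctly implemented client configures TLS: the default
# context verifies the server's certificate chain AND its hostname.
strict = ssl.create_default_context()

# The broken pattern (roughly what an empty TrustManager override
# amounts to): certificate and hostname checks switched off entirely,
# so ANY certificate, including a MITM proxy's, is accepted.
broken = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
broken.check_hostname = False          # must be disabled first
broken.verify_mode = ssl.CERT_NONE     # then verification can be dropped

print("strict:", strict.verify_mode, strict.check_hostname)
print("broken:", broken.verify_mode, broken.check_hostname)
```

Note that disabling verification takes deliberate extra code in every mainstream TLS stack; a client never ends up in the "broken" state by accident.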

The following screenshots show how Google Maps, the Google Play Store, and Uber react if they are served an invalid certificate; those apps will show the user a warning and will not attempt to communicate with a server whose identity appears invalid or can't be verified:


What is shown in the screenshots above is the correct way for an app to act if it is served an invalid certificate.

The Emirates app, however, under the same circumstances, when connecting through a MITM proxy serving an invalid certificate, will happily continue working without showing the user that anything is wrong:


Behind the scenes, the app establishes TLS connections without validating certificates, and exchanges information that should have been protected. It does so even if all communication goes through a MITM proxy, and even if that MITM proxy serves invalid certificates. Some of the data in your "skywardsDomainObject" even exposes things that you probably want to keep internal to your CRM system: Emirates staff user names for internal systems, Skywards "member score", passenger importance rating, downgrade/upgrade eligibility, etc. In the screenshot below, among other details, you can see that my "memberScore" value is 05.3228, that my account was at one point updated by a user with the login "CRISOPS", that my "impRating" is "NIP", etc.




Dear Emirates, this is not good. It is especially bad when an app handles (and exchanges) sensitive data about a user: things like travel plans, travel history, travel documents, credit cards, etc. Your app handles that kind of data, data that should be protected when it is transmitted over the internet.

The credit card industry has a standard called PCI-DSS that was created to ensure that companies accepting or processing credit card payments protect cardholder data in an appropriate manner. One of the PCI-DSS requirements specifically states that server certificates must be validated before payment/card data is sent over the internet. Dear Emirates, your app fails to comply with this requirement, and shouldn't in its current state be trusted with credit card details.



As mentioned, I have tried to raise this with Emirates through several channels over the course of a month, but seemingly without success. This is another attempt. Hopefully this blog post can make its way to the Emirates infosec team, so that the app can be fixed and I and others can use it without exposing our data to unknown third parties.

Meanwhile, if no one on your side reads the feedback submitted through the app's own feedback form, you may as well remove that feedback form while you're at it:


Dear Emirates, you have my contact details. If you need any more information about this, please reach out. I want to be able to use your app, but I can't do that until you fix this bug (and a few other bugs or implementation flaws). I am happy to provide your team with additional details on what they need to do to make your app as good as your inflight service.

Update #1

Update #1: It looks like this behavior was implemented intentionally. Hey, Emirates and Tigerspike, if you want to fix this, have a look at this...


Update #2, 26 March 2017:

The original content of this blog post will be restored after the issues raised in it have been corrected by Emirates. They are working on a fix, and a corrected version will probably be available within a few days.

In the meantime, if you are a user of the Emirates app for Android, and if the version you have installed has a version number 2.7.0 or less (or released prior to 26 March 2017), I would recommend that you do the following:
1) Uninstall the app temporarily. Reinstall it after a new version has been released by the Emirates/Tigerspike team.
2) Log in to your Skywards account on emirates.com, click on "account settings", and change your password.
2b) If you have used the same password elsewhere, change it on all sites/apps/etc where you have used it. ...and stop reusing passwords across sites...
3) If you have paid for a booking through the mobile app while connected over public wifi at a hotel, airport, coffee shop, or similar: ask your credit card issuer to issue a new replacement card with a new PAN/account number for you.


Update #3, 2 April 2017:

It looks like Emirates have released an updated version of the app, dated 29 March and available through Google Play store on 2 April, with a fix for the 'missing' (overridden) certificate validation.

Some other minor issues raised above (CRM data leaks, etc.) remain, but it is my understanding that they are working on those as well. Since the main issue originally raised has been addressed, the original contents of this blog post have now been restored, after being offline while EK worked on a fix (read: removed the few lines of code I pointed out in 'update #1'). Hopefully someone else will find this useful if they're planning to build a mobile app for their customers.

I have not yet seen any official communication from EK to app users regarding this issue, but I do hope that they will at some point contact their users with information on what happened, and steps users need to follow to protect accounts that have or may have been exposed to third parties as a result of this. In the meantime, if you have used the app, see "update 2".

I still do not have contact details for the EK infosec team, but I hope they will learn from this and make it easier to contact the people responsible for handling this kind of issue. Clearly, the usual contact channels are not suitable for reporting infosec issues of a technical nature.

Finally, I think some of the good advice raised by Troy Hunt in one of his recent blog posts would apply here as well, especially when it comes to making it easy to report security related bugs and ensure those reports reach the right people:
  • Make it easy to submit security reports
  • Treat security reports with urgency
  • You must disclose
  • Disclose early
  • Protect accounts immediately
  • Avoid misdirection and false or misleading statements
  • Don't be vague
  • Explain what actually happened
  • Keep customers updated
  • Apologise
See:
https://www.troyhunt.com/data-breach-disclosure-101-how-to-succeed-after-youve-failed/


Update #4, 18 April 2017:


Still no communication from EK to app users. I guess they decided to file this one under "Let's pretend it never happened."

30 September 2016

Why do some companies use so many different domains?

One thing that I find confusing is that some companies use a whole lot of different domains for different but related services and products. Depending on their line of business, this can make it tricky to know: am I communicating with the right company, or with someone else? In the case of online merchants, banks, credit card issuers, etc., this gets even more confusing and makes users more vulnerable to phishing and other deceptive practices.

Below are examples of domains used by two local banks here in Thailand, UOB and SCB. UOB has actually had a few more "phishing style" domains in the past, but they have retired e.g. "uobcyberbanking.com" and a few others.



The examples above are actually all legitimate domains, owned by the respective banks. However, for a customer or end user, it can be very difficult to know which domains/sites are legitimate, and to distinguish legitimate domains from a fake phishing domain pointing to a spoofed website, when so many different domain names are used in parallel.

I was wondering whether even the banks' own staff can spot the difference, so I ran an experiment. In the image above, there is an "scbcreditcard.com" domain that belongs to the bank. A quick check with a domain registrar revealed that the [possibly better named] domain scbcreditcards.com was up for grabs for a few dollars.



Would an employee at the bank, say a customer service representative, know which of the domains scbcreditcard.com and scbcreditcards.com is fake and which is real?

I registered the domain scbcreditcards.com, and simply made it redirect to the bank's real site. I then sent off a baited question to the bank on Twitter:





The bank eventually replied, but the reply was even more confusing. I guess the person operating their Twitter account doesn't know what an internet domain name is, because they replied that scb.co.th is the only domain name they use.

This is clearly not the case, as they in fact use many more domains, as seen in one of my screenshots above. Had it been true that they only used scb.co.th, that would have been good, and I wouldn't have written this blog post in the first place.

Regardless of the bank's confused answer, it is incredibly difficult for me and other customers [of banks and other companies] to spot a tiny difference like that, especially when so many different domains are used by the same company for its different online services.

In this case, I first made the newly registered domain scbcreditcards.com redirect to the bank's own (legit) site scbcreditcard.com, but I could have pointed it anywhere, as phishers and other scammers do. I later redirected it to this page, and finally to HIBP.

Being in control of the domain scbcreditcards.com also means I can buy an SSL certificate for it. Just for the sake of testing/demonstrating this in action, I spent another $10 on a DV certificate for the same domain. I wonder if the CA has enough checks in place to catch this...

Note: the bank's legitimate site at http://www.scbcreditcard.com doesn't even support https in the first place, which is a bit weak for a site in any way affiliated with a credit card issuer. Even if the site doesn't provide any access to cardholder data, I would expect a site like that to be https only, with HSTS.

I think it would make a lot of sense for companies to stick to one main domain and, if needed, use subdomains under it. If all of UOB's services were under uob.co.th, and all of SCB's services were under scb.co.th, it would immediately be more difficult for phishers to set up fake websites under spoofed domains.

In the meantime, consumers will have to figure out on their own whether a website they're accessing is legitimate, and some will continue to fall for spoofed/fake/phishing sites. Companies that set up a new domain for every department/product/service are partly to blame when their customers get tricked; it is simply not possible for end users and consumers to tell a legitimate site from a fake one when the same company uses five different domains for closely related services.

Does your company have too many different domains? Why? Would it make sense to consolidate them?

02 July 2015

Do you know which CAs can issue SSL/TLS certificates trusted by your PC or phone?

Most PC and phone users are blissfully unaware that their PC or phone has a very long list of trusted root CAs: certificate authorities that can sign SSL/TLS or code signing certificates that will be accepted at face value. Those root CA lists are regularly updated; most recently, all Windows PCs silently got a bunch of new trusted root CAs from the Chinese government, India CCA, etc.

In other words: a few hundred organizations that you have probably never heard of, and a few thousand organizations trusted by them, can issue certificates that are trusted by your web browser and mail client, and used for signing software. Any of them can issue an SSL certificate for any web property, and sometimes they do issue certificates to the wrong party. When a certificate is issued to someone other than the legitimate site owner, it opens the door to man-in-the-middle attacks, where an unknown third party can intercept and modify communication between a web browser and a web server.

To reduce the risk of getting man-in-the-middle'd by someone who got a certificate from one of those CAs (or the thousands of intermediates trusted by them), it is a good idea to regularly trim the list of trusted root CAs on your PC or phone so that only the ones you really need are trusted. It is relatively easy to update the list of trusted root CAs on a PC or phone; the following two infographics show how to trim trusted root CAs on Windows and Android, respectively.
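Before trimming anything, it helps to see what is actually in your trust store. On systems where Python is available, the standard library can enumerate the loaded root CAs (the exact count and names depend entirely on your OS and its updates):

```python
import ssl

# Load the platform's default trust store; create_default_context()
# calls load_default_certs() internally when no CA file is given.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

# Print the organization (or common name) of the first few roots.
for cert in roots[:10]:
    # "subject" is a tuple of RDNs; flatten it into a simple dict
    subject = dict(rdn[0] for rdn in cert["subject"])
    print(subject.get("organizationName") or subject.get("commonName"))

print(f"{len(roots)} trusted root CAs loaded from the system store")
```

Running this on a fresh OS install versus a well-patched one usually shows just how much that list churns over time.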

Windows


Android




29 June 2015

RandomCards: a simple low-tech method for managing strong passwords

Passwords, always a problem


Everyone with access to a computer has problems with passwords. Whenever some website is hacked, we're reminded that the majority of people use weak passwords like "P@ssw0rd1" or their dog's name. Users with weak passwords, and those who reuse their passwords, are the first victims, and often have their data compromised on other systems than the one that got hacked.

Others try to use strong passwords, but the elements that make a password strong, such as length, entropy (randomness), and not being based on a dictionary word, usually make strong passwords very hard to memorize. This is often overcome by using a password manager, or by simply writing the passwords down in a file or on paper.
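To put numbers on "length and entropy": a uniformly random password of n symbols drawn from an alphabet of k symbols carries n * log2(k) bits of entropy. A quick Python check:

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size).

    Only valid for truly random passwords; dictionary words and
    predictable substitutions carry far less entropy than this formula suggests.
    """
    return length * math.log2(alphabet_size)

print(password_entropy_bits(8, 26))   # 8 random lowercase letters: ~37.6 bits
print(password_entropy_bits(18, 64))  # 18 random Base64 characters: 108 bits
```

By comparison, "P@ssw0rd1" scores near zero against any cracker with a rules-based dictionary, regardless of what the formula says about its character classes.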

Unfortunately, password managers are sometimes compromised too, for understandable reasons: why would hackers not put extra effort into breaking into a system where thousands of users store all their credentials? They do, and as a result even top-of-the-line password managers are sometimes breached.

Good password managers are designed to make life harder for hackers by employing layer after layer of strong encryption, yet if a user's master password is not strong enough that user is at risk of having all their passwords leaked anyway.

Writing passwords down in a file or on paper is equally insecure, as has also been shown repeatedly. When the French TV channel TV5 Monde was hacked recently, it turned out that not only did they use very weak passwords such as "azerty12345" (the AZERTY equivalent of "qwerty12345") and "lemotdepassedeyoutube" ("the password for youtube"), but they also had them posted on post-it notes all over their offices.

Of course, they're not alone in using post-it notes for passwords:
Personally, I am not a fan of password managers, especially the "online"/cloud based ones where you store all your credentials in one central location, trusting a third party to ensure that unauthorized users can't access them.

I don't claim to have a solution to the issues surrounding passwords, but being sceptical of password managers and still wanting to use sufficiently strong passwords myself, I have put together an experimental app, "RandomCards", for generating and printing pocket-sized cards with random content that can be used to derive passwords.

RandomCards


RandomCards is a small app that generates large random numbers using cryptographic random number generators, converts them to printable/human-readable characters (base64), and prints out 10 wallet-sized cards with random characters on a sheet of A4 or Letter paper.
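The app itself is a .NET program, but the core idea is easy to sketch. Here is a minimal Python illustration (not the app's actual code) of the same approach: pull bytes from a CSPRNG, base64-encode them, and lay the characters out as a card grid:

```python
import base64
import secrets

def make_card(rows=8, cols=12):
    """Generate one card: a grid of base64 characters derived from
    cryptographically secure random bytes."""
    n_chars = rows * cols
    # 4 base64 characters encode 3 bytes of raw random data
    n_bytes = (n_chars * 3 + 3) // 4
    raw = secrets.token_bytes(n_bytes)                    # CSPRNG output
    chars = base64.b64encode(raw).decode("ascii")[:n_chars]
    return [chars[i:i + cols] for i in range(0, n_chars, cols)]

card = make_card()
for row in card:
    print(row)
```

The grid dimensions here are arbitrary; the real app's card layout may differ.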



The RandomCards app has a fairly simple user interface: choose which RNG (random number generator) you want to use and how many pages of RandomCards you want to print (with 10 cards on each page), hit the "Print" button, and select the target printer. Each card has a small unique icon, so you can keep a stack of them together and still tell the cards apart.


The list of random number generators available in the app depends on which RNGs you have on your system. On a baseline Windows system with no TPM, you may see only Microsoft's CSPs. If you have a TPM ("trusted platform module") installed, you should be able to use the TPM's hardware-based random number generator. The default option is "All Available RNGs", which generates random numbers using all installed RNGs and XORs them together. The result is a random sequence at least as strong as the strongest RNG, even if some of the other RNGs are weakened or predictable.
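The XOR-combining idea can be sketched as follows (a Python illustration, not the app's actual code). XOR-ing independent byte streams yields output at least as unpredictable as the strongest stream; a weak or even fully predictable stream cannot cancel out a good one:

```python
import os
import secrets

def xor_bytes(*streams):
    """XOR several equal-length byte strings together. The result is at
    least as unpredictable as the strongest independent input stream."""
    out = bytearray(len(streams[0]))
    for s in streams:
        assert len(s) == len(out), "all streams must have equal length"
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

# Two independent CSPRNG sources standing in for separate RNG providers
a = secrets.token_bytes(32)
b = os.urandom(32)
combined = xor_bytes(a, b)
```

Note the worst case: if one stream were all zeros (a completely broken RNG), the combined output would simply equal the other stream, so nothing is lost.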




Although this may resemble some kind of "post-it notes on steroids" password manager, the idea is that these cards contain enough entropy to be used for strong passwords, and since you can read them in any direction you want, it is much harder for an attacker who finds them to figure out your password than with an ordinary password note or file.

Print, laminate, and keep a sufficient number of cards in your wallet. The cards are wallet-sized for a good reason, and if you make up your own technique for reading them ("red pineapple card, start at J5, read diagonally up for 18 characters is for xyz.com"), they will provide you with strong passwords without you having to memorize a long random password, while making no sense to someone else if you lose your cards.

Shuffle your printed RandomCards, pick a starting point that you can memorize, pick an arbitrary reading direction (up, down, left, right, diagonally, diagonally pairwise, zig-zag [up/down/ltr/rtl], etc), pick an arbitrary password length (12 characters or longer), and each card offers a very large number of combinations of fairly strong* passwords.
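As an illustration of the "pick a start point and a reading direction" technique, here is a hypothetical Python helper (`read_card` is my own name for it, not part of the app) that walks a card grid in a chosen direction, wrapping around at the edges:

```python
def read_card(grid, row, col, drow, dcol, length):
    """Walk the card grid from (row, col) in direction (drow, dcol),
    wrapping around the edges, and collect `length` characters."""
    rows, cols = len(grid), len(grid[0])
    chars = []
    for _ in range(length):
        chars.append(grid[row % rows][col % cols])
        row += drow
        col += dcol
    return "".join(chars)

# A small sample grid; a real card would be larger
card = ["Ab3dEf7hIj2L",
        "mN8pQr4tUv9X",
        "yZ1cDe6gHi0K"]

# Start at row 0, column 4, read diagonally down-right for 8 characters
password = read_card(card, 0, 4, 1, 1, 8)
print(password)  # "Er6hUi2X" for this sample grid
```

Only the start point, direction, and length need to be remembered; the card supplies the randomness.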

* = Remember, since the random data on the RandomCards is base64 encoded, each character of a RandomCard password corresponds to 6 bits of entropy, so a 12-character string from one of these cards is equivalent to 72 bits of entropy, or 1 in 4,722,366,482,869,645,213,696 combinations for someone who has no access to your password cards.
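The arithmetic is easy to verify: base64 uses a 64-character alphabet, so each character carries log2(64) = 6 bits of entropy when the underlying data is uniformly random:

```python
import math

ALPHABET_SIZE = 64                            # base64: A-Z, a-z, 0-9, '+', '/'
bits_per_char = math.log2(ALPHABET_SIZE)      # 6 bits per character
entropy_bits = 12 * bits_per_char             # 72 bits for a 12-character password
combinations = ALPHABET_SIZE ** 12            # total search space, equal to 2**72
print(bits_per_char, entropy_bits, combinations)
```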


Download the app (or source code)


If you want to try out or use RandomCards, you can download the app from https://apps.huagati.com/download/RandomCardsApp.zip, or the source code for it from https://apps.huagati.com/download/RandomCardsSource.zip

The app requires a PC with .NET 4.0, sufficient user privileges to use the random number generators installed on the system, and a printer.

As always: provided as-is. No warranties (expressed or implied). Use at your own risk. Batteries not included.

Feedback, comments, questions? Post it in the comments section below.

18 May 2015

IKEA shows how NOT to do passwords...

A few weeks ago, I took my family to the local IKEA store here in Bangkok to pick up a few pieces of furniture. I am generally not a fan of loyalty programs offered by shops/banks/airlines/etc, but I made an exception and joined IKEA's "IKEA family" program to see if I would get any discount* on the items I purchased. (* = Nope, I didn't.)

When I got home from an out-of-town trip yesterday, there was a letter from IKEA containing a welcome letter and my member card. The first thing that caught my eye was the third line in the welcome letter: "Your login password: Your date of Birth (DDMMYYYY)". My WHAT? That doesn't seem very secure, does it?



I opened up my browser and went to their site to have a closer look. Right on the login page was a password reminder link, which I clicked. That opened up a message box confirming that they do indeed use your date of birth as a password, but even worse: the wording of that password reminder suggests that you can't change your password at all. After logging in I couldn't find a way to change the password or DOB, so I think you're stuck with your DOB as the password for your "IKEA family" account...



What's wrong with using your date of birth as a password?


Why is this bad, you say? Not everyone knows my date of birth, right? Well, unfortunately, it is very easy for a computer to test all possible dates of birth by making automated requests to login pages like the one used by "IKEA family". There are, after all, only 36,525 possible date combinations in a 100-year timespan. If we assume that most "IKEA family" members are between 17 and 85 years old, that drops to 24,837 combinations. That is way too easy to brute-force, and in a real-world attack each member account would (on average) require about half as many attempts before the correct DOB is found: just ~12k requests per member account. This can be done in a very short timespan (seconds) by your average home computer or smartphone.
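To see how small that search space really is, here is a back-of-the-envelope sketch in Python (the age range and reference date are the assumptions stated above) that enumerates every DDMMYYYY candidate for members aged 17 to 85:

```python
from datetime import date, timedelta

def dob_candidates(youngest=17, oldest=85, today=date(2015, 5, 18)):
    """Enumerate every plausible DDMMYYYY 'password' for members aged
    youngest..oldest on the given reference date."""
    start = date(today.year - oldest, today.month, today.day)
    end = date(today.year - youngest, today.month, today.day)
    d = start
    while d < end:
        yield d.strftime("%d%m%Y")
        d += timedelta(days=1)

candidates = list(dob_candidates())
print(len(candidates))  # 24837 candidate "passwords" to try per account
```

An attacker would simply fire these candidates at the login endpoint until one succeeds, which is why rate limiting and account lockout (and, of course, real passwords) matter.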

Now, someone may argue that this is the password for a membership account with a 16-digit membership number, a number which would be hard for someone else to guess. That may be the case, but it looks to me like the membership number starts with a 999320 prefix, followed by zeroes, and then a 6-digit membership number. Based on how the number is formatted, my guess is that those membership numbers are issued in sequential order, which would make it easy to automate a brute-force attack. An attacker could start at 9993 2000 0010 0000 and work their way up through the account list.

An automated brute-force attack would probably need to make somewhere between 5 and 8 billion requests to the "IKEA family" site to retrieve all members' data. This may sound like a lot, but for a computer it is not very hard work at all to make a few billion HTTP roundtrips over the span of a few days...

HTTP only


As an added bonus: the entire site, including the login page, uses plaintext HTTP instead of HTTPS. Whenever you access an HTTP-only site from an open wifi connection or a compromised network, you are sharing your information with whoever may be listening in.

What's at risk?


IKEA family is just a loyalty program, where you can collect bonus points and get discounts on items in their stores. Fortunately, there doesn't seem to be a way to tie a credit card or bank account to it [yet], [in this country].

What is the risk if someone compromises an IKEA family member account? PII: Personally Identifiable Information. When you sign up for an IKEA family membership, they ask for your name, address, email, DOB, ID card or passport number, mobile phone number, family details etc. I shared that information with IKEA, but I may not necessarily want to share it with a hacker in China, or Russia, or elsewhere. Likewise, IKEA may not want to share their customer data with hackers who may use it for phishing, or even resell it to competitors.

I immediately updated my profile and changed name/address/etc to dummy data, and I will email IKEA in a short while and ask them to delete my "IKEA family" account until they handle my (and other members') information in a more responsible way. Maybe I will even join "IKEA family" again in the future, if they become more responsible with how they handle member data.



In addition to exposing your PII, the site also allows you to redeem bonus points and review transaction history (including previous purchases at IKEA stores).



I had a quick look at the login pages for "IKEA family" sites in other countries, and it looks like the IKEA family program's website is implemented differently in different countries. The IKEA family sites in nearby Singapore and Malaysia appear to be identical to the one used by IKEA Thailand, while the one used by IKEA Sweden appears to be a bit more secure.


Dear IKEA, ...


If anyone from IKEA happens to come across this, please have a look at how the online version of your "IKEA family" loyalty site is implemented in some countries. You are making your membership data easily accessible to hackers and (potential) evil-minded competitors.

If whoever built the "IKEA family" site is this sloppy with passwords, there may of course be other weaknesses as well. If you change the way you handle authentication, you may also want to spend a bit of time on looking into other security aspects of your site.



Passwords


To everyone else: your date of birth is not a good password. Neither is your grandmother's date of birth, your dog's maiden name, or "p@ssw0rd69". Don't do it, especially if you are using it to protect other people's PII. If a site you are using insists on a weak/bad password scheme, reconsider whether you really want/need to use that site, and limit what information you share with it.

06 June 2014

FSecure's FreedomeVPN - what does "tracking protection" really mean?

Since my previous blog entry on FSecure's "FreedomeVPN" app, which showed that it didn't block Google's tracking cookies, there have been a couple of conversations on Twitter on this matter. One such conversation took place last night, when twitter user @PrivacyMatters referred them to my blog post and asked for FSecure and Mikko's take on it. It went something like this:


Interesting... they apparently take the position that their failure to block Google's tracking cookies* is not really a tracking issue, or maybe that Google is not a tracking company, or maybe that Google is not in the advertising or selling-user-data-to-advertisers business.

* = Google's persisted tracking cookies include a long unique number assigned to each visitor. They identify each site visitor and where they came from, and on Google's end they can be matched up to everything else that Google knows about that user.

So... maybe I am just too picky. Maybe Google doesn't track users, and their tracking cookies are not covered by FSecure's "untrackably invisible" claim. This morning I decided to take FSecure's FreedomeVPN for another 3-minute test just to see how their tracking protection measures up, and whether I am maybe just too picky.

This time I decided to simply check if Facebook is able to track me around the web while FreedomeVPN's tracking protection is active. You may have noticed that many sites around the web have embedded Facebook Like boxes.

The Facebook Like box shows if you and any of your friends have clicked "like". It comes in a few different shapes and sizes, but whenever it is present on a site, every time you visit that site Facebook will know you did so, through the use of their own tracking cookies. This seems like something I would expect Freedome's "tracking protection" to block.
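This kind of third-party embedding is easy to spot. As a simplified illustration (a generic sketch, not a tool used in this test), the following Python snippet lists the external hosts a page pulls scripts or iframes from; the browser's request to each such host carries that host's cookies, which is exactly how an embedded Like box identifies you:

```python
import re
from urllib.parse import urlparse

def third_party_hosts(html, page_host):
    """List external hosts referenced via src= attributes; a request to
    each of these hosts carries that host's cookies along."""
    hosts = set()
    for url in re.findall(r'src\s*=\s*["\'](https?://[^"\']+)', html):
        host = urlparse(url).netloc
        if host != page_host:
            hosts.add(host)
    return sorted(hosts)

# Hypothetical page markup with a first-party script and an embedded Like box
sample = '''
<script src="https://example-news.com/app.js"></script>
<iframe src="https://www.facebook.com/plugins/like.php?href=..."></iframe>
'''
print(third_party_hosts(sample, "example-news.com"))
```

A real tracker blocker would intercept these cross-site requests (or strip their cookies) rather than just list them.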

If Facebook is unable to identify you, the Like box will state how many people have clicked "like" on the site and show some random profile pictures of people who have done so:



If Facebook is able to identify you, the Like box will show whether you have clicked "like" at any point, and it will show your own profile picture and the profile pictures of any of your friends who have done so too, rather than pictures of random Facebook users:



My test today was simple. First, I reinstalled FSecure's FreedomeVPN (to ensure I had the latest and greatest version installed for the test):


* = Note the "become untrackably invisible" slogan is still there...

After installing, I activated all the protection features, including "tracking protection" with an exit point in Finland.



Surely I was now "untrackably invisible" to all datamining and advertising companies...? Easy to find out: just hit a (non-FB) website with an embedded FB asset*.

* = FB like button/like box/share button/login button/etc all work the same and come with the same tracking features.


Lo and behold. Despite having all of FreedomeVPN's "anti-tracking" features enabled and now being "untrackably invisible", Facebook was able to identify me when I visited a third-party site. This means they are able to track me on ANY site around the internet that has an embedded FB Like, Share, Login, etc button.

Sorry, FSecure and Mikko Hyppönen: I think we have different views on what "tracking protection" and "untrackably invisible" mean.

After the test, the FreedomeVPN control panel said it had blocked one tracking attempt. Obviously not the one from Facebook, but maybe some other, more obscure tracking service in that case...?


This app doesn't seem to do what FSecure's marketing claims it does, so I will uninstall it for now. Maybe I will try it again some time in the future, if FSecure's developers catch up with their marketing team's claims.