29 October 2019

Do Central Group care about customer privacy and data security?

Central Group is a large Thai company with its roots in retail. They own a chain of department stores and grocery stores and have expanded into many other industries over the years.

Like elsewhere in the world, traditional brick-and-mortar retail stores face increasing competition from online retailers, and Central is no different. In the past few years they have invested heavily in online shopping, and according to a Bangkok Post interview with their CEO they aim to be the largest online retailer in Thailand by 2021.

This is an impressive goal.

However, running a large e-commerce site is not only about maximizing sales and collecting money from customers, it also comes with responsibilities such as protecting customer data.

My online PowerBuy shopping experience


One of Central's store brands is called PowerBuy. Their department stores have PowerBuy outlets where you can buy TVs, washing machines, rice cookers, air conditioners, cameras, mobile phones and other household electronics and appliances. They also have a website where they sell the same goods online.

PowerBuy store in a mall


A few days ago, I was shopping around for air conditioners to replace two ten-year-old failing and noisy units. After visiting two nearby stores I decided to try buying them online instead, so I went to PowerBuy's website at www.powerbuy.co.th .

The shopping experience was similar to most other online shopping sites: you select the goods you want to buy, select quantity, and go to checkout.

There was a small hiccup at checkout since my initial order combined air conditioners and fans, something their ordering system apparently cannot handle. After removing the fans from the order, I was able to place an order for two air conditioners.

When it was time to pay for my order I was redirected to a third party payment processor that PowerBuy/Central have outsourced their payment processing to. Outsourcing payment processing is probably one of the smartest decisions they could have made for this site.

After completing the payment, I was redirected back to PowerBuy's website where I got an instant order confirmation.

I noticed something interesting: the redirect took me to a page that accepted a single order number parameter and displayed not only my order but all my contact information: email address, name, billing address, shipping address, etc. The order number itself looked a bit too simple: it consisted of "PWB" followed by today's date, followed by a few additional digits that appeared to be a timestamp.

PowerBuy's order confirmation page



This looked weak, so I sent a Twitter comment to PowerBuy's social media team to alert them that they may have a weakness on their order confirmation page. It seemed like something that would expose order and customer data to screen scrapers and bots.

Others chimed in, tagged another Central Group Twitter account, and commented that they had also noticed similar issues elsewhere on Central-owned websites.

Wait, it gets worse...

After not seeing any response at all from PowerBuy or Central after a couple of days, I decided to contact them via email instead. Before doing so I had a second and closer look at the PowerBuy website. What I came across revealed that their data privacy and security issues are a lot worse than I first thought.

When I accessed my customer profile, it showed a list of my orders. The link to the order details for each order is just a URL that ends with a six-digit number. This six-digit number turns out to be the internal order ID from their database, a number that is generated sequentially as orders come in.

PowerBuy's order summary page. The sequential Order ID in the URL could be used to retrieve any order in their database.


If I incremented the order ID from my order by one I would see the order placed by the customer who made an order right after me, including all their contact details. If I decremented the order ID by one I would see the order someone else placed right before mine.

There was no user authentication in place to ensure people didn't retrieve other people's orders!

This website was wide open to order enumeration: anyone could access any order, regardless of who that order belonged to. While it would be time-consuming to traverse all orders manually in a web browser, the way this was implemented made it trivial for anyone to automate retrieval of a large number of orders using a few lines of JavaScript or an off-the-shelf screen-scraping tool.
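
To illustrate just how little effort this takes, here is a minimal sketch of what walking a sequential, unauthenticated order ID looks like. It is written in Java against a made-up endpoint, purely as an illustration; the same loop is just as short in browser JavaScript or any scraping tool.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OrderEnumerationSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical endpoint: any site that keys order pages on a sequential,
            // unauthenticated ID can be walked with a loop like this.
            for (int orderId = 100000; orderId <= 100100; orderId++) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://shop.example/orders/" + orderId))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    // On a vulnerable site, each response would contain someone else's
                    // name, address, and other order details.
                    System.out.println("Order " + orderId + ": " + response.body().length() + " bytes");
                }
            }
        }
    }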

A closer look revealed that the problem was in the GraphQL API used by their website. It exposes a number of queries, and it didn't validate that the caller was requesting order or customer data belonging to themselves. If you were logged in to the site, you could retrieve anyone's orders, along with their shipping and billing addresses, tax IDs, etc.
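
The missing piece is a server-side ownership check. Below is a minimal, self-contained sketch (hypothetical types and names, not their actual code) of the kind of check I would expect wherever an order is looked up by ID: verify that the requested order belongs to the authenticated customer before returning it. The same rule applies to every query the API exposes, not just order lookups.

    import java.util.Map;
    import java.util.Optional;

    public class OrderAccessSketch {

        record Order(long id, String customerId, String shippingAddress) {}
        record Session(String customerId) {}

        private final Map<Long, Order> ordersById; // stand-in for the order database

        OrderAccessSketch(Map<Long, Order> ordersById) {
            this.ordersById = ordersById;
        }

        Optional<Order> getOrder(long orderId, Session session) {
            Order order = ordersById.get(orderId);
            // Treat "no such order" and "someone else's order" the same way,
            // so the API doesn't reveal which sequential IDs exist.
            if (order == null || !order.customerId().equals(session.customerId())) {
                return Optional.empty();
            }
            return Optional.of(order);
        }

        public static void main(String[] args) {
            var sketch = new OrderAccessSketch(Map.of(
                    1001L, new Order(1001, "cust-A", "1 First Road"),
                    1002L, new Order(1002, "cust-B", "2 Second Road")));
            Session customerA = new Session("cust-A");
            System.out.println(sketch.getOrder(1001, customerA)); // own order: returned
            System.out.println(sketch.getOrder(1002, customerA)); // someone else's: empty
        }
    }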

If this was in Europe, PowerBuy and Central would probably face a hefty fine for GDPR violations, given that the order confirmation and summary pages include the customer's shipping address, billing address, email address, phone number, tax ID, etc.

Not only did this GraphQL API expose customer data, it also exposed things that I would expect a retailer to want to keep internal. This included e.g. the credit merchant rates they pay to different local banks when customers pay using credit cards; these rates were sent back to every shopper on their website. Anyone with an F12 key on their keyboard can access this information; hit F12 to open your web browser's developer tools and browse away.

Don't blame the techies, this is an organizational failure


Some people may see this as just a bug, an implementation flaw, but I believe it is more of an organizational failure.

How could something so trivially exploitable pass code reviews, QA testing, security reviews, or even the scrutiny of a business analyst? Finding a bug like that in a test environment is understandable, but how did this make it into their production environment? My guess is that none of those processes are in place in their organization. For an outsider, this looks like an organization that has no code reviews, no QA, no security reviews. "If it compiles, ship it!"

How many other similar issues do they have on their websites and mobile apps? Personally, I would be very surprised if this is a single isolated issue.

I sincerely hope that they will not blame this on some low-level [presumably underpaid] techie who did the final implementation. Rather, this should be addressed at a higher level as an organizational and management failure, followed by a proper security review of all their websites and exposed API endpoints, and by implementing a way for customers and others to give them feedback on privacy and security issues.

Contacting Central Tech, PowerBuy, Central Retail

Since I had received no response to my initial feedback to PowerBuy's Twitter account, I decided to try other channels. First, I sent an email to Central's customer service team and to Central Tech, the Central subsidiary in charge of their e-commerce websites.

That email also went unanswered.

I also reached out to someone I used to work with almost 20 years ago whom I knew had worked for Central Tech recently. He told me that he had left Central Tech a while ago, but that he would pass on my findings to them.

In parallel, I started looking for who at Central/PowerBuy may be responsible for their online endeavors. I quickly found the name of their CEO, the same person quoted in the Bangkok Post article I linked to above. However, I wasn't able to find the CEO's email address or any other contact information, so I tried a few different email addresses based on different combinations of his first and last name. None worked.

After a day I decided to try a different channel and sent the CEO a LinkedIn message to let him know of my findings. Since he is not one of my LinkedIn contacts, I first had to purchase a "LinkedIn Premium" account so that I could message someone not on my contact list. He replied to my LinkedIn message within an hour with a brief "Hi Kristofer, thanks so much for this alert. I will follow up. Regards. Nicolò".

I replied and asked for his email address so that I could provide him with steps to reproduce and other details of my findings. That question went unanswered, so I still didn't have an email address for him or for anyone else at Central, Central Tech, or PowerBuy to send additional information to.

At this point, that short message "Hi Kristofer, thanks so much for this alert. I will follow up. Regards. Nicolò" was the only response I had seen from anyone currently working at Central/PowerBuy/Central Tech regarding this issue.

Later, he replied with a lengthier response and included a contact email address so that I could provide them with more details on my concerns over their shortcomings on protecting customer data.

This again points to a management and organizational issue. Customer service didn't respond to or act on my feedback. Their social media team didn't either. They probably have no escalation path and no means to act on information security related feedback. There is no clear way to contact their tech team besides a generic "info@central.tech" email address that appears to be unmonitored.

The only escalation path that seems to work at this point is contacting a former Central Tech employee. He has been very helpful, though.

Because of how trivial it is to exploit the weaknesses in PowerBuy/Central's website and APIs, I doubt I am the first person to notice this. Others may have noticed it, some may have tried to contact PowerBuy/Central and met the same obstacles in getting in touch with them as I did, yet others may have decided to misuse these weaknesses for bad purposes.

Conclusion


I would have expected PowerBuy/Central to acknowledge the problem and to take immediate action to protect the exposed data. Yet, days later, I can't see any signs that they have done so. At the time of writing this, the website still allows anyone to view anyone else's order, billing, and shipping information.

Also, why is it so difficult to give some organizations feedback on security issues? It wouldn't be hard for them to set up a single "security@central.tech" email alias that is monitored from time to time, in case someone somewhere decides to notify them about a security issue. Maybe they do have such an alias, but I certainly could not find it published anywhere.

Running a large e-commerce site is not only about maximizing sales and collecting money. It also comes with responsibilities, including protecting customer data.

If Central Group are really committed to becoming the local leader in e-commerce and online retail, perhaps security and data privacy should be on the daily agenda?

I still don't have an answer to the question I raised in the title of this rant, "Do Central Group care about customer privacy and data security?". I hope they do take these things seriously, that they will work on improving their processes, and I hope they will use this case as a learning experience and improve the way they respond to feedback on security and privacy matters.

(end of rant)


Update: (6 Nov 2019) Today, I met with people from the Central Tech team who assured me that they take security and privacy issues very seriously and that they will work to ensure that they have better security measures in place to reduce the risk of future data leaks/exposure. According to the Central Tech team, the underlying technical issues that made me concerned about Central Group's ability to protect customer data have been addressed, and the vulnerabilities that got my attention should be closed now.

When I met with Central Tech, I shared some ideas with them on how they can identify and prevent bots and screen scrapers from accessing their websites and APIs, which I hope come in handy. I also encouraged them to make it easier for customers like myself to report concerns about security and privacy to their team, and to ensure that the respective customer service teams know how to escalate technical issues within their organization. I suggested this because it took me several attempts, through different channels, before I got a confirmation that they were addressing these concerns.

I hope this write-up can serve as a case study and learning experience both for Central Tech as well as for others who run large e-commerce websites that process PII data.

Timeline:
25 Oct 2019: I placed an order on powerbuy.co.th. When doing so, I noticed a couple of issues that I tried to report to their social media team.
28 Oct 2019: After not hearing back from PowerBuy's social media team, I had a closer look at the website and noticed additional security issues. I tried to report this to PowerBuy, Central, and Central Tech via a few different channels.
29 Oct 2019: Not being sure if my reports were being addressed by the PowerBuy and Central Tech teams, I tried to escalate my concerns to the CEO of Central Retail. Later in the day I also published this write-up, hoping that it would get the right attention.
6 Nov 2019: I met a few people from the Central Tech team who assured me that they take these issues seriously, that they have addressed the vulnerabilities/weaknesses that I had raised, and that they will look for additional potential vulnerabilities and continue to improve their website security.



Update 2: (19 Nov 2019) A couple of weeks have passed since Central Tech told me that they have fixed all security issues, so I am adding back the details of the data protection issues that I noticed on the PowerBuy website.

Hopefully this can be useful for someone else building e-commerce sites, as an example of things to include when testing prior to release into production.

I still can't see a security contact address or bug reporting channel for Central Tech or any of the web properties they manage published anywhere on their website.

Until Central have a clear channel for reporting bugs and security issues, others who find security issues on Central's web properties may have to resort to escalating their concerns through the same channels that I used to get their attention.

I also haven't seen any notification from PowerBuy or Central to their customers regarding having left all their customer data exposed. I think it is fair to assume that I am not the first person to notice these weaknesses, and that PowerBuy's customer and order data may already have been accessed, downloaded, or screen-scraped by unauthorized third parties. It would have been prudent for Central/PowerBuy to notify their customers that their information may have been exposed.

23 May 2018

Things you probably don't want to do on your [airline] website's payment pages

Web User Tracking and Privacy

The recent news coverage of how certain companies have used Facebook's graph APIs to hoard user data has [slightly] raised people's awareness of the lack of privacy online. This has led to some good and healthy discussions about online data collection vs user privacy. While it is not new that companies use a wide variety of tools to gather as much data as possible about their users, many users have been blissfully unaware of how extensively they are tracked when they do anything online, whether shopping, making airline/hotel/car reservations, or just reading a news article.

While it is now well established that many companies use a wide variety of tracking tools on their websites and in their mobile apps to gather more information about their customers and how they use these sites, the average website user may still have only a fuzzy idea of how much data is gathered and how many people have access to it. Most of these trackers are third party tools from companies who are happy to gather and analyze huge amounts of user and behavioral data. The output visible to these tracking companies' customers (i.e. website owners) often doesn't show the detail and granularity of the data gathered and stored. While the tracking tools may collect very detailed data about website and app users, the "visible part" is often limited to colorful charts and reports showing aggregate data for a large number of users. Behind the scenes, it can be a lot more invasive, with logging down to every mouse movement and keystroke made by a website user.

User tracking and its detrimental effects on user privacy are an interesting topic in themselves, but on top of the privacy concerns, the use of tracking tools can introduce unintended security issues that may violate industry regulations such as PCI-DSS and regional/local legal requirements such as GDPR.

Airline IBEs and PCI-DSS

I recently had a quick look at a few well-known airlines' internet booking engines. Like many others, airlines like to use online trackers to get a better understanding of how users use their websites, and where they can make changes and tweaks that will increase sales and boost revenue. This is very often done using third party tools where JavaScript and/or other resources are loaded from a third-party-controlled website/host. In addition, other third-party-hosted resources, such as JS frameworks served by CDNs, are often used instead of locally hosted copies.

Setting the obvious privacy concerns aside, a more interesting aspect is when these third party hosted resources are used on payment and checkout pages: pages where e.g. credit/debit card data is processed. This of course introduces additional concerns regarding PCI-DSS compliance, since all those third party scripts have access to payment data entered on those pages. Unless the right precautions* are taken, the same scripts can be modified by third parties at any time, without alerting users or the administrators of the site that uses them to the fact that the code actually running in users' browsers is no longer the code that was originally intended.

* = SRI (Subresource Integrity) can be used to prevent modified third party code from running in modern browsers. It is advisable to deploy it together with a content security policy (CSP), which defines which sources should be trusted for different types of content, along with a URL that modern browsers can automatically report policy violations to.
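
As a rough illustration (a sketch with made-up URLs, not taken from any of the sites discussed here): the SRI integrity value is just a base64-encoded digest of the exact script file you reviewed, and the CSP is a response header listing the sources you are willing to trust.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class SriSketch {
        public static void main(String[] args) throws Exception {
            // In practice you would hash the exact bytes of the third party script
            // you reviewed; this stand-in string is only for illustration.
            byte[] scriptBytes = "console.log('third party script');".getBytes(StandardCharsets.UTF_8);

            byte[] digest = MessageDigest.getInstance("SHA-384").digest(scriptBytes);
            String integrity = "sha384-" + Base64.getEncoder().encodeToString(digest);

            // The script tag pins the exact version that was reviewed; a browser that
            // supports SRI will refuse to run the file if its content changes.
            System.out.println("<script src=\"https://cdn.example/lib.js\" integrity=\""
                    + integrity + "\" crossorigin=\"anonymous\"></script>");

            // A matching CSP response header limits where scripts may be loaded from at
            // all, and report-uri lets browsers report violations back to the site owner.
            System.out.println("Content-Security-Policy: default-src 'self'; "
                    + "script-src 'self' https://cdn.example; report-uri https://example.com/csp-report");
        }
    }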

While end users may be unaware of e.g. PCI-DSS and other data protection requirements, I would expect medium-sized and large airlines to have technical staff who are aware that a third party script loaded on a payment page has access to any data present on, or entered on, that page. The same companies probably also have one or a few people involved in PCI compliance, and although PCI compliance is often handled by people with limited technical skills, they should at a minimum be aware of the risks associated with dynamically loading third party code from third party hosts. However, when looking at some airline websites' checkout/payment pages, I almost get the impression that even some of the most basic PCI-DSS compliance requirements have been set aside for the benefit of user tracking and data gathering performed using third party tools.

A second interesting aspect is that these sites often use EV / 'extended validation' certificates that make their company name show up in green in browser address bars. While this is intended to make it clear to users which company/entity they are interacting with, and to instill trust that they're interacting with the correct entity, the current implementation of EV certificate indicators in most web browsers doesn't make it clear to users when the same page includes resources loaded from, and controlled by, third parties.

Some of the PCI-DSS requirements that appear to me as possibly overlooked by at least a few of these airline sites are:

  • 2.2.5 - "Remove all unnecessary functionality, such as scripts...": In my opinion many of these third party scripts absolutely fall in the scope of 'unnecessary functionality' for a payment page. If the script is not directly needed for successful processing of those payments, it doesn't belong on a payment page.
  • 3.2.2 - "Do not store the card verification code or value...": Those who have a generic key logger running on the credit card page would appear to be in violation of this requirement...
  • 4.1 - "Use strong cryptography and security protocols": Many of these third party hosts support older insecure versions of TLS/SSL and/or insecure ciphers. Even if they don't directly transmit cardholder data, this can potentially be used by unauthorized third parties to manipulate script content while in transit.
  • 6.3.2 - "Review custom code prior to release to production or customers" and 6.4 - "Follow change control processes...": How can you review code and follow change control procedures for code that is dynamically loaded by each end user system from third party controlled hosts? Without using SRI (which very few airlines do), there is no way to know when third party scripts/content has been modified or to block modified content from loading.
  • 6.5.1 - "Address common coding vulnerabilities in software-development processes..." / "...injection flaws...": This practice is by definition code injection from third party controlled systems.
  • 12.8 - "Maintain and implement policies and procedures to manage service providers": Does anyone at these companies even have an idea of how many third parties have direct or indirect access to their customers' payment data through inclusion of third party hosted scripts?

Examples

Without making this rant too long, I will just let a few screenshots talk for themselves.

The screenshots below are from the payment pages of a few well-known airlines, and on the right hand side of each one is the Chrome developer tools window (which you can launch by pressing F12 while using Chrome) showing which scripts and other resources are loaded by the same page, and from which site/host each one is loaded. I really wonder if all those third party sites are involved in the same airlines' PCI-DSS certification programme and audits, and if each of them is PCI compliant.

Alaska Airlines



Emirates



Etihad



Finnair



Is that a keylogger on your credit card form, Finnair, or are you just happy to see me?

What does PCI-DSS requirement 3.2.2 say about storing CVVs?


Frontier Airlines



Jetblue



Korean Air



Lufthansa



SAS



Silkair



...the list can go on and on, but I think that's enough examples of payment pages littered with excessive third party content.

None of the airlines above have a defined content security policy (CSP) to control which third party sites are valid sources for loading scripts and other content, and none of them use subresource integrity (SRI) to protect users from scripts that have been tampered with or otherwise modified while hosted on third party sites.

The next time you are going to pay for something online, press F12 to launch your web browser's developer tools, and you can see for yourself what third party resources are included in the payment page.

I will include just one more example, this time from an airline that has implemented their payment page in a cleaner-than-average way, with only a minimal set of scripts loaded from their own servers, Visa's servers, and from an Adobe server with a qatarairways.com hostname:

Qatar Airways



FAQ

What's the problem?

TL/DR: Some airline websites make excessive use of third party scripts/CSS/HTML hosted on sites/hosts not controlled by the website owner, which in turn exposes them to potential vulnerabilities at those third party sites. In other words: they expose a larger-than-necessary attack surface. When this is done on payment pages, it increases the chance that they may leak their customers' credit card details to unauthorized third parties.

I'm responsible for an airline website that does this - what is the worst that could happen?

Someone, either a rogue authorized user at a third party organization or an unauthorized person who has found a weakness or backdoor in one of the third party hosts, can modify one of the third party hosted scripts (or CSS files) to make it capture credit card data and funnel it elsewhere. When this is discovered, the credit card companies will invite you to pay stiff penalties for the breach if you want to continue processing credit card payments, and depending on where in the world you are located/based you may also be legally required to issue a breach notification. This will inevitably lead to negative publicity for your organization.

Has this ever caused a problem in the real world?

Yes, it has. Not too long ago, Delta had customer credit card data exposed by a third party script loaded on their site as part of a chat help tool:

What can I do as a site owner to limit the ways I expose user data to third parties?

  1. Use common sense and limit the number of third party resources you load from third party sites/hosts to the minimum set needed.
  2. Implement CSP to control which third party sites can be referenced from your site, and use SRI to block third party content from loading if it has been modified/tampered with.
  3. Involve your internal or trusted external technical resources in compliance reviews. Don't just rely on checklists and scan tools for e.g. PCI compliance.
  4. Make yourself aware of, and follow industry best practices to protect your (and your customers') data.
  5. Use common sense. See #1.

23 March 2017

Dear Emirates, your mobile app has a security problem

TL/DR - the Emirates mobile app included an empty override for the OS built-in X509TrustManager, making it accept any TLS/SSL certificate served by a man-in-the-middle, and thereby exposing customer data, credit cards, etc for mobile app users to third parties. That has been fixed now. See updates added at the bottom of the blog post for details.


Dear Emirates,


I fly with you a decent amount of miles every year. I do that because you provide good service, good food, good entertainment, all at a good price, and I sometimes need to fly to places that you fly to. I just got back home from a 6h+16h+16h+6h roundtrip on EK flights on your A380s, and I had fun all the way. In short, I am a happy Emirates customer.

Around a month ago, I received an email from you, pitching your mobile app, so I decided to give it a try. I think my habit of flying with you puts me within the demographics targeted by your mobile app, and I was hoping that the app would be useful to me.

Now, most users will just install a mobile app and use it if it provides a friendly enough user experience and user interface. However, I like to dig a bit deeper before I trust any mobile app. The underlying reason is that I think the mobile app industry is a bit of a "wild west". Or maybe a better analogy would be "Vietnam war". A lot of companies desperately want to have a mobile app, and as a result they will hire any company to build one, regardless of whether said company has the competence needed to build a good and secure app.

One of the first things I checked after installing the Emirates app was whether transport security was properly implemented. It turns out it was not. At least not in the Android app I installed on my not-too-ancient Samsung device. In short, transport security (often referred to as HTTPS, SSL, or TLS) is a mechanism for protecting data by encrypting it as it travels over the internet; it is used to ensure that a web browser or app is getting data from, and sending data to, the correct servers, and to protect that data from eavesdropping.

I immediately raised this to you through several channels: email, twitter direct messages, the app's feedback form, and public tweets. However, it seems like my feedback didn't make it to the right people within Emirates. I know you have a lot of competent people working for you in a variety of functions, and I hope that applies to information security as well.

Do you have an inhouse infosec department? If so, please have them review your mobile app. They will understand what this is all about.

A more detailed explanation of the problem your app suffers from follows. I will try to explain this in not-too-technical layman's terms to make it easier to understand for non-technical people.

Whenever you access a website serving or consuming data that needs to be protected, that is usually done over a secure protocol, a.k.a. transport security, sometimes referred to as "SSL" (now deprecated) or TLS. When a web address starts with "https", the web browser you use will connect to the web server using a (usually) secure encrypted protocol. One key part of establishing this connection is verifying the identity of the server you're connecting to. This is done using something called a "certificate", "SSL certificate", or "TLS certificate". A certificate is usually signed by a third party, a "certificate authority" or CA. Their cryptographic signature guarantees that a web server or app server belongs to the domain it claims to belong to.

Your emirates.com website has a certificate, and so does your mobileapp.emirates.com server, which provides your mobile app with access to your reservation system, your CRM system, Boxever, etc.

If you open the emirates.com website in a web browser, you will see that the address bar shows a padlock icon, and that the address has a "https" prefix. This means that the connection is secure, and that the certificate presented by the site is valid and signed by a CA trusted by the user's browser and device.



Now, if a bad actor wanted to intercept traffic between a user's web browser and your website, they could try to do so using a man-in-the-middle (MITM) proxy. In short, this is a program running on a system somewhere between the user and the web server they're trying to access; it terminates the secure connection and establishes a new secure connection to the server, which allows it to read all communication between the browser and the web server. However, this only works if the web browser, app, and device used trust the certificate served by the MITM proxy. If not, the web browser will show a warning to indicate that a secure connection could not be established.

MITM proxies are not uncommon on corporate networks, or on public wifi networks such as the ones available to hotel guests, at airports, in coffee shops, etc. A correctly configured device with a modern browser, or a correctly implemented app using transport security, will normally warn the user if a MITM proxy is trying to intercept secure connections. For insecure connections, the user is generally unaware that their traffic is being intercepted.

If a user tries to access the emirates.com website from a mobile device through a MITM proxy using a self-signed certificate, or a certificate signed by a certificate authority that is not trusted by the device, the web browser will display a certificate validation warning that looks something like this:



The same thing applies to mobile apps: if a mobile app needs to communicate with a backend system somewhere on the internet, that should be done over a secure protocol. This is usually done using the same HTTPS (TLS) protocols used by web browsers, with the only difference being that apps do it "behind the scenes" without showing the user an address bar. The Emirates mobile app communicates with servers such as mobileapp.emirates.com, boxever.com (for tracking user behavior), and a few others.

Some mobile apps do this in a correctly implemented way; they validate the certificate presented by the backend server and ensure that it was issued by a certificate authority trusted by the device the app is running on. Some don't. Your mobile app is one of those that will trust any certificate, regardless of whether the issuer is trusted by the device, including self-signed certificates.
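
For the technically inclined, the generic anti-pattern that produces this behavior on Android looks roughly like the sketch below (a generic illustration, not Emirates' actual code, which I have not seen). A trust manager whose check methods are empty accepts every certificate; the fix is simply to not install one and let the platform's default validation do its job.

    import java.security.SecureRandom;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;

    public class TrustAllAntiPattern {
        public static void main(String[] args) throws Exception {
            // ANTI-PATTERN: empty check methods accept every certificate, so a MITM
            // proxy presenting a self-signed certificate goes completely unnoticed.
            TrustManager[] trustEverything = new TrustManager[] {
                new X509TrustManager() {
                    @Override public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                    @Override public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                    @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                }
            };
            SSLContext brokenContext = SSLContext.getInstance("TLS");
            brokenContext.init(null, trustEverything, new SecureRandom());
            // Anything built on brokenContext (HttpsURLConnection, OkHttp, etc.)
            // now silently skips certificate validation.

            // THE FIX: don't install a custom trust manager at all. Passing null keeps
            // the platform's default X509TrustManager and its normal validation.
            SSLContext safeContext = SSLContext.getInstance("TLS");
            safeContext.init(null, null, null);
        }
    }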

The following screenshots show how Google Maps, Google Play Store, and Uber react if they are served an invalid certificate; those apps will show the user a warning and will not attempt to communicate with a server whose identity appears invalid or can't be verified:


What is shown in the screenshots above is the correct way for an app to act if it is served an invalid certificate.

The Emirates app, however, will under the same circumstances (connecting through a MITM proxy serving an invalid certificate) happily continue working without showing the user that anything is wrong:


Behind the scenes, the app establishes TLS connections without validating certificates, and exchanges information that should have been protected. It does so even if all communication goes through a MITM proxy, and even if that MITM proxy serves invalid certificates. Some of the data in your "skywardsDomainObject" even exposes things that you probably want to keep internal to your CRM system, things like Emirates staff user names for internal systems, Skywards "member score", passenger importance rating, downgrade/upgrade eligibility/non-eligibility, etc. In the screenshot below, among other details you can see that my "memberScore" value is 05.3228, my account was at one point updated by a user with the login "CRISOPS", my "impRating" is "NIP", etc.




Dear Emirates, this is not good. It is especially bad if an app handles (and exchanges) sensitive data about a user: things like their travel plans, travel history, travel documents, credit cards, etc. Your app handles that kind of data, data that should be protected when it is transmitted over the internet.

The credit card industry has a standard called PCI-DSS that was created to ensure that companies accepting or processing credit card payments protect cardholder data in an appropriate manner. One of the requirements of PCI-DSS specifically states that server certificates must be validated before sending payment/card data over the internet. Dear Emirates, your app fails to comply with this requirement, and shouldn't in its current state be trusted with credit card details.



As mentioned, I have tried to raise this with Emirates through several channels over the course of a month, but seemingly without success. This is another attempt. Hopefully this blog post can make its way to the Emirates infosec team, so that the app can be fixed and I and others can use it without exposing our data to unknown third parties.

Meanwhile, if no one on your side reads the feedback submitted through the app's own feedback form, you may as well remove that feedback form while you're at it:


Dear Emirates, you have my contact details. If you need any more information about this, please reach out. I want to be able to use your app, but I can't do that until you fix this (and a few other) bugs or implementation flaws. I am happy to provide your team with additional details on what they need to do in order to make your app as good as your inflight service.

Update #1

Update #1: It looks like this behavior was implemented intentionally. Hey, Emirates and Tigerspike, if you want to fix this, have a look at this...


Update #2, 26 March 2017:

The original content of this blog post will be restored after the issues raised in it have been corrected by Emirates. They are working on a fix, and a corrected version will probably be available within a few days.

In the meantime, if you are a user of the Emirates app for Android, and if the version you have installed has a version number 2.7.0 or less (or released prior to 26 March 2017), I would recommend that you do the following:
1) Uninstall the app temporarily. Reinstall it after a new version has been released by the Emirates/Tigerspike team.
2) Log in to your Skywards account on emirates.com, click on "account settings", and change your password.
2b) If you have used the same password elsewhere, change it on all sites/apps/etc where you have used it. ...and stop reusing passwords across sites...
3) If you have paid for a booking through the mobile app while connected over public wifi at a hotel, airport, coffee shop, or similar: ask your credit card issuer to issue a new replacement card with a new PAN/account number for you.


Update #3, 2 April 2017:

It looks like Emirates have released an updated version of the app, dated 29 March and available through Google Play store on 2 April, with a fix for the 'missing' (overridden) certificate validation.

Some other, more minor issues raised here (CRM data leaks, etc.) remain, but it is my understanding that they are working on those as well. Since the main issue originally raised has been addressed, the original contents of this blog post have now been restored, after being offline while EK worked on a fix (read: removed the few lines of code I pointed out in 'update #1'). Hopefully someone else will find this useful if they're planning to build a mobile app for their customers.

I have not yet seen any official communication from EK to app users regarding this issue, but I do hope that they will at some point contact their users with information on what happened, and steps users need to follow to protect accounts that have or may have been exposed to third parties as a result of this. In the meantime, if you have used the app, see "update 2".

I still do not have contact details for the EK infosec team, but I hope they will learn from this and make it easier to contact the people who are responsible for handling this kind of issue. Clearly, the usual contact channels are not suitable for reporting infosec issues of a technical nature.

Finally, I think some of the good advice raised by Troy Hunt in one of his recent blog posts would apply here as well, especially when it comes to making it easy to report security related bugs and ensure those reports reach the right people:
  • Make it easy to submit security reports
  • Treat security reports with urgency
  • You must disclose
  • Disclose early
  • Protect accounts immediately
  • Avoid misdirection and false or misleading statements
  • Don't be vague
  • Explain what actually happened
  • Keep customers updated
  • Apologise
See:
https://www.troyhunt.com/data-breach-disclosure-101-how-to-succeed-after-youve-failed/


Update #4, 18 April 2017:


Still no communication from EK to app users. I guess they decided to file this one under "Let's pretend it never happened."

30 September 2016

Why do some companies use so many different domains?

One thing that I find confusing is that some companies use a whole lot of different domains for different but related services and products. Depending on their line of business, this can sometimes make it tricky to know: am I communicating with the right company or with someone else? In the case of online merchants, banks, credit card issuers, etc this can get even more confusing and makes users more vulnerable to phishing and other deceptive practices.

Below are examples of domains used by two local banks here in Thailand, UOB and SCB. UOB has actually had a few more "phishing style" domains in the past, but they have retired e.g. "uobcyberbanking.com" and a few others.



The examples above are actually all legitimate domains, owned by the respective banks. However, for a customer or end user, it can be very difficult to know which domains/sites are legitimate, and to distinguish legitimate domains from a fake phishing domain pointing to a spoofed website, when so many different domain names are used in parallel.

I was wondering if even the banks' own staff could spot the difference, so I ran an experiment. In the image above, there is an "scbcreditcard.com" domain that belongs to the bank. A quick check with a domain registrar revealed that the [possibly better named] domain scbcreditcards.com was up for grabs for a few dollars.



Would an employee at the bank, say a customer service representative, know which one of the domains scbcreditcard.com and scbcreditcards.com is fake and which one is real?

I registered the domain scbcreditcards.com, and simply made it redirect to the bank's real site. I then sent off a baited question to the bank on Twitter:





The bank eventually replied, but the reply was even more confusing. I guess the person operating their Twitter account doesn't even know what an internet domain name is, because they replied that scb.co.th is the only domain name they use.

This is clearly not the case, as they in fact use many more domains, as seen in one of the screenshots above. Had it been true that they only used scb.co.th, that would have been good, and I wouldn't have written this blog post in the first place.

Regardless of the bank's confused answer, it is incredibly difficult for me and other customers [of banks and other companies] to spot a tiny difference like that, especially when so many different domains are used by the same company for their different online services.

In this case, at first I made the newly registered domain scbcreditcards.com redirect to the bank's own (legit) site scbcreditcard.com, but I could have pointed it anywhere, as phishers and other scammers do. I later redirected it to this page, and finally to HIBP.

Being in control of the domain scbcreditcards.com also means I can buy an SSL certificate for it. Just for the sake of testing/demonstrating this in action, I spent another $10 for a DV certificate for the same domain. I wonder if the CA has enough checks in place to catch this...

Note: the bank's legitimate site at http://www.scbcreditcard.com doesn't even support https in the first place, which is a bit weak for a site in any way affiliated with a credit card issuer. Even if the site doesn't provide any access to cardholder data, I would expect a site like that to do https only, with HSTS.

I think it would make a lot of sense for companies to stick to one main domain, and if needed use subdomains under that. If all UOB's services were under "uob.co.th", and all SCB's services were under scb.co.th, then it would immediately be more difficult for phishers to set up fake websites under spoof domains.

In the meantime, consumers will have to try to figure out on their own whether a website they're accessing is legitimate or not, and some will continue to fall for spoof/fake/phishing sites. Companies that set up a new domain for every department/product/service are partially to blame when their customers get tricked; it is simply not possible for end users and consumers to spot the difference between a legitimate site and a fake site when the same company uses five different domains for closely related services.

Does your company have too many different domains? Why? Would it make sense to consolidate them?

02 July 2015

Do you know which CAs can issue SSL/TLS certificates trusted by your PC or phone?

Most PC and phone users are blissfully unaware that their PC or phone has a very long list of trusted root CAs: certificate authorities that can sign SSL/TLS or code signing certificates that will be accepted at face value. Those root CA lists are regularly updated; most recently, all Windows PCs silently got a bunch of new trusted root CAs from the Chinese government, India CCA, etc.

In other words: a few hundred organizations that you have probably never heard of, and a few thousand organizations trusted by them, can issue certificates that are trusted by your web browser and mail client, and that are used for signing software. Any of them can issue an SSL certificate for any web property, and sometimes they do issue certificates to the wrong party. When a certificate is issued to someone other than the legitimate site owner, it opens up man-in-the-middle attacks where an unknown third party can intercept and modify communication between a web browser and a web server.

To reduce the risk of getting man-in-the-middle'd by someone who got a certificate from one of those CAs (or the thousands of intermediates trusted by them), it is a good idea to regularly trim the list of trusted root CAs on your PC or phone so that only the ones you really need are trusted. It is relatively easy to update the list of trusted root CAs on a PC or phone; the following two infographics show how to trim trusted root CAs on Windows and Android, respectively.

Windows


Android




29 June 2015

RandomCards: a simple low-tech method for managing strong passwords

Passwords, always a problem


Everyone with access to a computer has problems with passwords. Whenever some website is hacked, we're reminded that the majority of people use weak passwords like "P@ssw0rd1" or their dog's name. Users with weak passwords, or those who reuse their passwords, are the first victims, and often have their data compromised on systems other than the one that got hacked.

Others try to use strong passwords, but the elements that make a password strong, such as length, entropy (randomness), and not being based on a dictionary word in the first place, usually make strong passwords very hard to memorize. This is often overcome by using a password manager, or by simply writing the passwords down in a file or on paper.

Unfortunately, password managers are sometimes compromised too, and for understandable reasons: why would hackers not put extra effort into breaking into a system where thousands of users store all their credentials? They do, and as a result even top-of-the-line password managers are sometimes breached.

Good password managers are designed to make life harder for hackers by employing layer after layer of strong encryption, yet if a user's master password is not strong enough that user is at risk of having all their passwords leaked anyway.

Writing passwords down in a file or on paper is equally insecure, as has also been shown repeatedly. When the French TV channel TV5 Monde was hacked recently, it turned out that not only did they use a combination of very weak passwords such as "azerty12345" ("qwerty12345") and "lemotdepassedeyoutube" ("the password for youtube"), but they also posted them on post-it notes all over their offices.

Of course, they're not alone in using post-it notes for passwords.

Personally, I am not a fan of password managers, especially the "online"/cloud based ones where you store all your credentials in one central location, trusting a third party to ensure that unauthorized users can't access them.

I don't claim to have a solution to any of the issues surrounding passwords, but being sceptical of password managers and still wanting to use sufficiently strong passwords myself, I have put together an experimental app for generating and printing pocket-sized cards with random content that can be used to derive passwords, "RandomCards".

RandomCards


RandomCards is a small app that will generate large random numbers using cryptographic random number generators, convert them to printable/human-readable characters (Base64), and print out 10 wallet-sized cards with random characters on a sheet of A4 or Letter paper.



The RandomCards app has a fairly simple user interface: choose which RNG (random number generator) you want to use, choose how many pages of RandomCards you want to print (with 10 cards on each page), hit the "Print" button, select the target printer, and it will print out your cards. Each card has a small unique icon, so you can keep a stack of them together and still tell the cards apart.


The list of random number generators available in the app depends on which RNGs you have on your system. On a baseline Windows system with no TPM, you may see only Microsoft's CSPs. If you have a TPM ("trusted platform module") installed, you should be able to use the TPM's hardware-based random number generator. The default option is "All Available RNGs", which will generate random numbers using all installed RNGs and XOR them together. This should result in a random sequence at least as strong as the strongest RNG, regardless of whether any of the other RNGs are weakened or predictable.
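
The XOR combination is worth spelling out, since it is what makes the "All Available RNGs" option safe even if one generator turns out to be weak. Here is a rough sketch of the idea, in Java rather than the app's actual .NET code, with two of the JVM's SecureRandom sources standing in for the Windows CSPs and TPM RNG:

    import java.security.SecureRandom;
    import java.util.Base64;

    public class RandomCardSketch {
        public static void main(String[] args) throws Exception {
            // Two independent generators standing in for the app's multiple RNG sources.
            SecureRandom rngA = SecureRandom.getInstanceStrong();
            SecureRandom rngB = new SecureRandom();

            byte[] a = new byte[48];
            byte[] b = new byte[48];
            rngA.nextBytes(a);
            rngB.nextBytes(b);

            // XOR the two streams: the combined output is at least as unpredictable as
            // the stronger source, even if the other one is weakened or predictable.
            byte[] combined = new byte[a.length];
            for (int i = 0; i < combined.length; i++) {
                combined[i] = (byte) (a[i] ^ b[i]);
            }

            // Base64-encode to printable characters and lay them out as a small card grid.
            String card = Base64.getEncoder().encodeToString(combined);
            for (int row = 0; row < card.length(); row += 16) {
                System.out.println(card.substring(row, Math.min(row + 16, card.length())));
            }
        }
    }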




Although this can resemble some kind of "post-it notes on steroids" password manager, the idea is that these cards contain enough entropy to be used for strong passwords, and since you can read them in any direction you want, it is much more difficult for an attacker who finds a lost card to figure out your password than it would be with an ordinary password note or file.

Print, laminate, and keep a sufficient number of cards in your wallet. The cards are wallet-sized for a good reason, and if you make up your own technique for reading them ("red pineapple card, start at J5, read diagonally up for 18 characters is for xyz.com") then they will provide you with strong passwords without you having to memorize a full-length random password, while making no sense to someone else if you lose your cards.

Change around your printed RandomCard cards, pick a starting point that you can memorize, pick an arbitrary reading direction (up, down, left, right, diagonally, diagonally pairwise, zig-zag [up/down/ltr/rtl], etc.), pick an arbitrary password length (12 characters or longer), and each card offers a very large number of combinations of fairly strong* passwords.

* = Remember, since the random data on the RandomCard cards is Base64 encoded, each character carries 6 bits of entropy (every 4 characters correspond to 3 bytes), so a 12-character string from one of these cards is equivalent to 9 bytes or 72 bits of entropy, or 1 in 4,722,366,482,869,645,213,696 for someone who has no access to your password cards.


Download the app (or source code)


If you want to try out or use RandomCards, you can download the app from https://apps.huagati.com/download/RandomCardsApp.zip, or the source code for it from https://apps.huagati.com/download/RandomCardsSource.zip

The app requires a PC with .net 4.0, sufficient user privileges to use the random number generators installed on the system, and a printer.

As always: provided as-is. No warranties (expressed or implied). Use at your own risk. Batteries not included.

Feedback, comments, questions? Post it in the comments section below.

18 May 2015

IKEA shows how NOT to do passwords...

A few weeks ago, I took my family to the local IKEA store here in Bangkok to pick up a few pieces of furniture. I am generally not a fan of loyalty programs offered by shops/banks/airlines/etc, but I made an exception and joined IKEA's "IKEA family" program to see if I would get any discount* on the items I purchased. (* = Nope, I didn't.)

When I got home from an out-of-town trip yesterday, there was an envelope from IKEA containing a welcome letter and my member card. The first thing that caught my eye was the third line in the welcome letter: "Your login password: Your date of Birth (DDMMYYYY)". My WHAT? That doesn't seem very secure, does it?



I opened up my browser and went to their site to have a closer look. Right on the login page was a password reminder link, which I clicked. That opened a message box confirming that they do indeed use your date of birth as a password, but even worse: the wording of that password reminder suggests that you can't change your password later. After logging in, I couldn't find a way to change the password or DOB, so I think you're stuck with your date of birth as the password for your "IKEA family" account...



What's wrong with using your date of birth as a password?


Why is this bad, you say? Not everyone knows my date of birth, right? Well, unfortunately, it is very easy for a computer to test all possible combinations of someone's date of birth by making automated requests to a login page like the one used by "IKEA family". There are, after all, only 36,525 possible dates in a 100-year timespan. If we assume that most "IKEA family" members are between 17 and 85 years old, that drops to 24,837 combinations. That is way too easy to bypass, and in a real-world attack each member account would on average require about half as many attempts before the correct DOB is found: roughly 12,000 requests per member account. This can be done in a very short timespan (seconds) by your average home computer or smartphone.
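
For anyone who wants to check the arithmetic, here is a small sketch that counts the candidate dates of birth:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;

    public class DobSearchSpace {
        public static void main(String[] args) {
            LocalDate today = LocalDate.now();

            // All dates of birth in the last 100 years: roughly 36,525 candidates.
            long all = ChronoUnit.DAYS.between(today.minusYears(100), today);

            // Members assumed to be between 17 and 85 years old: roughly 24,837
            // candidates, so on average about 12,000 guesses per account.
            long plausible = ChronoUnit.DAYS.between(today.minusYears(85), today.minusYears(17));

            System.out.println("All DOBs in 100 years:       " + all);
            System.out.println("Plausible member DOBs:       " + plausible);
            System.out.println("Average guesses per account: " + plausible / 2);
        }
    }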

Now, someone may argue that this is the password for a membership account with a 16-digit membership number, a number which would be hard for someone else to guess. That may be the case, but it looks to me like the membership number starts with a 999320 prefix, followed by zeroes, and then a 6-digit member number. Based on how the number is formatted, my guess is that those membership numbers are issued in sequential order, which would make it easy to automate a brute-force attack. An attacker could start at 9993 2000 0010 0000 and work his/her way up through the account list.

An automated brute-force attack would probably need to make somewhere between 5 and 8 billion requests to the "IKEA family" site to retrieve all members' data. This may sound like a lot, but for a computer it is not very hard work at all to make a few billion HTTP round trips over the span of a few days...

HTTP only


As an added bonus: the entire site, including the login page, uses plaintext HTTP instead of HTTPS. Whenever you access an HTTP-only site from an open wifi connection or a compromised network, you are sharing your information with whoever may be listening in.

What's at risk?


IKEA family is just a loyalty program, where you can collect bonus points and get discounts on items in their stores. Fortunately, there doesn't seem to be a way to tie a credit card or bank account to it [yet], [in this country].

What is the risk if someone compromises an IKEA family member account? PII: Personally Identifiable Information. When you sign up for an IKEA family membership, they ask for your name, address, email, DOB, ID card or passport number, mobile phone number, family details, etc. I shared that information with IKEA, but I may not necessarily want to share it with a hacker in China, or Russia, or elsewhere. Likewise, IKEA may not want to share their customer data with hackers who may use it for phishing, or even resell it to competitors.

I immediately updated my profile and changed name/address/etc to dummy data, and I will email IKEA in a short while and ask them to delete my "IKEA family" account until they handle my (and other members') information in a more responsible way. Maybe I will even join "IKEA family" again in the future, if they become more responsible with how they handle member data.



In addition to accessing your PII, the site also allows you to redeem bonus points and to review transaction history (including previous purchases at IKEA stores).



I had a quick look at the login pages for "IKEA family" sites in other countries, and it looks like the IKEA family program's website is implemented differently in each country. The IKEA family sites in nearby Singapore and Malaysia appear to be identical to the one used by IKEA Thailand, while the one used by IKEA Sweden appears to be a bit more secure.


Dear IKEA, ...


If anyone from IKEA happens to come across this, please have a look at how the online version of your "IKEA family" loyalty site is implemented in some countries. You are making your membership data easily accessible to hackers and (potential) evil-minded competitors.

If whoever built the "IKEA family" site is this sloppy with passwords, there may of course be other weaknesses as well. If you change the way you handle authentication, you may also want to spend a bit of time looking into other security aspects of your site.



Passwords


To everyone else: your date of birth is not a good password. Neither is your grandmother's date of birth, your dog's maiden name, or "p@ssw0rd69". Don't do it, especially if you are using it to protect other people's PII. If a site you are using insists on using a weak/bad password, reconsider whether you really want/need to use that site, and limit what information you share with it.