The Importance of Cloud Security, Common Vulnerabilities, and Best Security Practices

Author: William Khem-Marquez (CSSA Research Team Member)

Over the past few years, many enterprises have migrated their systems to cloud services such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Cloud-based infrastructure has opened the door for developers looking to build scalable, resource-heavy applications without the burden of purchasing, maintaining, and running all of the required hardware in-house. An additional benefit of the cloud is the ease of data organization; long gone are the days of manual backups, convoluted file structures, and poor access control visibility, all areas where cloud providers excel. Because of these improvements, many companies now host sensitive data, such as databases or financial records, on the cloud. In turn, developers must ensure they implement the correct measures to protect this data. This has propelled cloud security to become one of the most important topics for any DevOps engineer to familiarize themselves with.

What is cloud security?

In brief, cloud security is a set of preventative measures taken to secure data, applications, and infrastructure hosted on the cloud from potential cyber attacks. Fortunately, most of the big-name cloud providers already have systems in place to mitigate risk, and the majority of successful cyber attacks against cloud applications are a result of human error. This article will serve as a summary of a few of the most common security threats against a cloud application, and how to mitigate them.

Some Common Vulnerabilities

1. Leaked keys!

Application programming interface (API) keys, Secure Shell (SSH) keys, storage keys: all of these are integral to granting authorization to protected resources in large-scale programs. Some keys are designed to be handed to users simply as a means of identification. More commonly, however, keys are meant to be used internally to verify that a request genuinely came from the company or software provider, and these keys can grant administrator-level authorization to all server resources. It is imperative that such high-privilege keys be given only to authorized personnel (e.g., internal developers) and otherwise kept out of public reach in production. If these keys are compromised, an attacker could gain remote access to classified files containing usernames, passwords, or employee and financial data. They could also potentially upload and execute ransomware or other malware on your platform.

Unfortunately, developers often make the mistake of including these private keys when uploading source code to public repositories. Research conducted in 2017 by North Carolina State University (NCSU) uncovered 201,642 unique API and cryptographic keys spread across 100,000+ public repositories. What’s even more concerning is that the researchers had only searched through around 13% of all GitHub repositories at the time, and only for a select few popular key formats. Who knows how many more keys are out there today, ready to be compromised with a simple Google search? For those interested in learning more about how API keys are typically leaked in online source code, this repository documents many examples.
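The kind of scan the NCSU researchers ran boils down to pattern matching against well-known key formats. The sketch below shows the idea (the same approach tools like gitleaks and truffleHog take); the patterns cover two documented formats plus a generic catch-all, and are illustrative rather than exhaustive:

```python
import re

# Regexes for a few well-known key formats; the first two match
# documented AWS and Google key shapes, the third is a generic
# "something_key = '...'" assignment heuristic.
KEY_PATTERNS = {
    "AWS Access Key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Google API Key": re.compile(r"\bAIza[0-9A-Za-z\-_]{35}\b"),
    "Generic key assignment": re.compile(
        r"(?i)\b(?:api|secret)_?key\s*=\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_for_keys(source: str):
    """Return (label, matched_text) pairs for anything resembling a leaked key."""
    hits = []
    for label, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((label, match.group(0)))
    return hits
```

Running a scanner like this as a pre-commit hook catches most accidental leaks before they ever reach a public repository.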

Another mistake, albeit less obvious, is the insecure integration of API keys into desktop or mobile applications. If your application sends requests that include an API key, it’s vulnerable to network sniffing tools. Web debugging tools such as Fiddler, Charles Proxy, and Burp Suite can set up a man-in-the-middle (MITM) scenario, allowing an adversary to view all the data from your network requests in cleartext, including endpoints, request headers, request parameters, API keys, and more. Even if an encryption mechanism is implemented to obfuscate requests, anyone with sufficient reverse engineering knowledge can decompile your application and reverse your obfuscation to recover the original content. Some developers may advocate for SSL/certificate pinning as a means of preventing MITM-based analysis, but these measures are easily circumvented with free software like Frida. Exactly how to bypass these protections is outside the scope of this article, but more about this can be found with a quick Google search or may be published in a future CSSA blog post.

Best Practices for Securing API keys

When pushing source code to a public repository, make sure that all private API keys are removed. It’s also recommended to create private repositories for internal access only, or make use of the “.gitignore” file to keep confidential files away from the public eye.
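A minimal “.gitignore” along these lines keeps common secret-bearing files out of version control (the filenames here are typical examples, not a complete list):

```
# Keep local secrets and credentials out of version control
.env
config/secrets.yml
*.pem
*.key
```

Note that “.gitignore” only prevents future commits; a key that was ever committed remains in the repository history and must be treated as compromised and rotated.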

Ultimately, it’s best to assume that any string stored on the client-side can be recovered. For that reason, never store any private API keys or directly query sensitive endpoints on the client side. A far better approach is to implement a controller in your server backend to handle operations that require querying private endpoints/using API keys. The client can then call this controller whenever an API call needs to be made, without the risk of leaking sensitive information.
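A minimal sketch of that controller idea, using only the standard library: the client asks the backend for a named operation, and the backend attaches the key server-side. The endpoint names and the `api.example.com` URL are hypothetical stand-ins for whatever third-party service you actually call:

```python
import os
from urllib.parse import urlencode

# Operations the client is allowed to request through the proxy
# (names are hypothetical).
ALLOWED_ENDPOINTS = {"weather", "geocode"}

def build_upstream_url(endpoint: str, client_params: dict) -> str:
    """Build the third-party request on the server. The API key is read
    from a server-side environment variable (or secrets manager), so it
    never ships inside the client binary."""
    if endpoint not in ALLOWED_ENDPOINTS:
        raise ValueError(f"endpoint not allowed: {endpoint}")
    api_key = os.environ["THIRD_PARTY_API_KEY"]
    query = urlencode({**client_params, "key": api_key})
    return f"https://api.example.com/{endpoint}?{query}"
```

The allowlist matters as much as the hidden key: without it, the controller becomes an open proxy that attackers can point at arbitrary endpoints.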

2. Cloud Misconfigurations

Cloud providers offer a multitude of built-in tools for organizing and visualizing access control for networks and resources. Yet misconfiguration of these settings remains the most common cause of cloud security breaches today. A study by DivvyCloud estimated that from 2018 to 2019, cloud misconfigurations cost companies 5 trillion USD in damages globally.

Nearly all cloud misconfiguration-related vulnerabilities are caused by human error, meaning they’re entirely avoidable. When a new cloud environment is first initialized, default configurations typically permit access to all resources from anyone. Imagine a scenario where a team decides to leave these lax settings as-is to make the development phase easier. Perhaps they set up insecure but “temporary” common credentials (e.g., admin:password) for SSH or RDP logins, or even no credentials at all. At production time, the team completely forgets to implement proper access control before deploying to the internet. Then, in the span of just a few minutes, malicious actors discover and exploit the vulnerabilities, stealing data or compromising the entire service with ransomware before it even gets its first public user.

Now, you may be skeptical – “a few minutes from the first deployment to fully compromised? Seems a tad bit unrealistic…”. If that describes you, then you’d be surprised to learn that this actually happens in the real world all the time. Researchers at Palo Alto Networks set up honeypots (deliberately misconfigured applications) to attract and study the operations of threat actors. The following stats are directly quoted from the publication:

  • “80% of the 320 honeypots were compromised within 24 hours and all of the honeypots were compromised within a week.”
  • “The most attacked SSH honeypot was compromised 169 times in a single day.”
  • “One threat actor compromised 96% of our 80 Postgres honeypots globally within 30 seconds.”

The truth is, cloud misconfigurations can have devastating consequences for businesses, and it has become terrifyingly easy for hackers to spot and leverage these vulnerabilities to hijack systems. Documentation on common misconfigurations is widely available on the internet, so threat actors don’t even need to be particularly knowledgeable to take advantage of these flaws. With the rise of open-source automated vulnerability scanners, even a script kiddie can break into insecure systems by running a few commands, at no cost. Oftentimes, threat actors aren’t even looking to target your service in particular; rather, they’ll run automated tools against as many services as they can, hoping that one has a vulnerability. This is known as a non-targeted cyber attack, as opposed to a targeted attack, where attackers spend time researching and planning beforehand.

Cloud misconfigurations can result in a broad range of attack types. There are far too many to cover them all in-depth in this article, but I’ve included a summary of some of the most common ones below:

  1. S3 Bucket Misconfigurations:

AWS’s Simple Storage Service (S3), as its name suggests, can be used to store files and data critical to your service on the cloud. But failure to configure buckets properly can be problematic. A popular example is allowing public read or write access to private files. This was the case in the 2017 breach of Booz Allen Hamilton, a US government contractor, which exposed classified credentials and data related to the Department of Defense. I’d recommend this article by CloudAnix as a reference for common S3 bucket misconfigurations.
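Public-access mistakes like this are mechanical to detect. The sketch below checks a bucket ACL for grants to the AWS “everyone” groups; the input dict follows the shape boto3’s `get_bucket_acl` returns, but the function is a simplified illustration, not a full audit:

```python
# Grantee URIs AWS uses for "everyone" and "any signed-in AWS user"
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list:
    """Return (grantee_uri, permission) pairs for grants that expose a
    bucket to everyone, given an ACL dict in boto3's get_bucket_acl shape."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") in PUBLIC_GRANTEES:
            findings.append((grantee["URI"], grant.get("Permission")))
    return findings
```

In practice you would run a check like this across every bucket in the account, and pair it with S3’s account-level Block Public Access setting as a safety net.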

  2. Privilege Escalation:

When a malicious user exploits a bug to elevate permissions to access resources that are intended to be unavailable to their user level, it’s known as privilege escalation. Sometimes, it’s caused by compromised credentials leading to account takeovers. Other times, it may be directly related to the improper configuration of user access control settings. There are two main classifications of privilege escalation: vertical and horizontal. Vertical escalation occurs when a threat actor gains access to the privileges of a user with higher authorization (such as a system admin or developer). They would then be able to steal classified data, execute administrative commands, set up backdoors/reverse shells, or install various types of malware. On the other hand, horizontal privilege escalation is when a threat actor is able to access resources owned by another user of the same authorization level. This includes private messages, emails, addresses, phone numbers, contacts, payment info, previous orders, and anything else the original owner has access to. 
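The two classes collapse into two questions an access-control check must ask: is the caller’s role high enough (vertical), and does the resource actually belong to them (horizontal)? A toy sketch, with hypothetical role names:

```python
def can_access(username: str, role: str, resource_owner: str) -> bool:
    """Toy access check. Admins may access any resource; an ordinary
    user may only access resources they own. Forgetting the ownership
    comparison is exactly what enables horizontal escalation."""
    if role == "admin":
        return True          # explicit vertical privilege
    return username == resource_owner
```

Many real-world horizontal escalation bugs are simply endpoints that fetch a record by ID and skip the `resource_owner` comparison, trusting whatever ID the client supplies (an insecure direct object reference).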

  3. Server-Side Request Forgery:

A server-side request forgery (SSRF) attack occurs when an attacker exploits application functionality to force the backend to make requests to an arbitrary domain. This need not be an external domain; SSRF is most lethal when the request can be routed internally to the server hosting the application (localhost/127.0.0.1), or to a related company network that the server is permitted to access but that would otherwise be restricted to outsiders. A successful SSRF attack could let attackers read or change files hosted on the server, or even remotely execute code – though we won’t get into the nuances of all the possibilities today, since they aren’t cloud-specific.
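One common mitigation is to resolve and validate any user-supplied URL before the backend fetches it, rejecting internal address ranges. A simplified guard using only the standard library might look like this (a production guard must also re-validate after redirects and pin the resolved IP, since DNS rebinding can change the answer between the check and the fetch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to loopback, private, link-local,
    or reserved address space, so the backend never fetches them."""
    parsed = urlparse(url)
    if parsed.scheme not in {"http", "https"} or not parsed.hostname:
        return False  # also blocks file://, gopher://, etc.
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a denylist of address ranges is a floor, not a ceiling; where possible, an allowlist of known-good destination hosts is stronger.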

How to Mitigate the Risk of Cloud Misconfigurations

As discussed previously, cloud misconfigurations are caused by human error and are entirely avoidable. Alas, because the cloud offers so many extra configuration settings, it can be significantly more challenging for developers to manage than traditional in-house hosting. There’s a famous saying in the cybersecurity community: “complexity is the enemy of security.” The problem isn’t always a negligent developer too lazy to implement proper access control; more likely, developers are simply unaware of the potential consequences. Too often, developers focus so much on delivering the project as soon as possible that security takes a backseat. For that reason, it’s always best to ensure that your developers are educated and on the same page regarding proper cloud security measures. Alternatively, you could have a dedicated internal or external cybersecurity team audit each environment before deployment.

Remember how we said before that automated tools can make things easier for attackers? Well, those same automated tools can be used during the development cycle as internal security assessments to detect and patch misconfigurations too! The GitHub repository that was linked previously also includes a multitude of proactive and reactive defensive/forensic resources that you could use to secure your cloud application.
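At their core, many of those tools just walk your configuration looking for known-bad combinations. The toy audit below flags firewall/security-group rules that open sensitive admin or database ports to the entire internet; the rule dict shape here is a simplified, hypothetical one, not any provider’s actual API:

```python
# Ports that should essentially never be world-reachable
RISKY_PORTS = {22: "SSH", 3389: "RDP", 5432: "Postgres"}

def audit_ingress(rules: list) -> list:
    """Flag ingress rules exposing admin/database ports to 0.0.0.0/0.
    Each rule is a simplified dict: {"port": int, "cidr": str}."""
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
            findings.append(
                f"{RISKY_PORTS[rule['port']]} (port {rule['port']}) open to the world"
            )
    return findings
```

Wiring a check like this into your CI/CD pipeline means a misconfigured rule fails the build instead of reaching production, which is exactly the window the honeypot numbers above show you don’t have.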

Interactive Cloud Exploitation/Defensive Practice

For anyone interested in offensive penetration testing, or developers wanting to better understand how attackers exploit misconfigurations, flaws.cloud is an awesome and interactive challenge where you can (legally) learn to exploit a real-world AWS application. Flaws2.cloud is similar but includes an additional defender path to hone your incident response skills by reacting to the attack from the victim’s perspective.

Other Cloud Security Practices You Should Know

The following security measures aren’t directly related to the vulnerabilities we have already outlined. However, these are essential principles for any cloud developer to familiarize themselves with to make their platform as secure as possible. 

  1. Encrypting Data at Rest and in Transit

Encryption is the process of taking cleartext data and transforming it with an algorithm to produce an output (ciphertext) with little to no resemblance to the original data. The encryption method can be public, but the key used to transform it back to its original state (decryption) must be kept secret.

Data at rest refers to data stored on the cloud not currently being used. Encrypting it in this state is akin to storing your data in a vault. Encrypting individual files or the entire storage container would stop an attacker from reading the contents of the files, even if they had breached the filesystem. 

Data in transit refers to data being sent over the internet, such as emails, credentials, or other requests. Encrypting it in this state is akin to sending a package off in an armoured vehicle; it prevents adversaries from intercepting and stealing or modifying the data before it reaches its intended destination. For example, if you’re offering a software-as-a-service (SaaS) application, you should use secure protocols such as HTTPS and FTPS (both built on TLS/SSL) over their unencrypted counterparts (HTTP/FTP) when transporting data.
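To make the cleartext-to-ciphertext idea concrete, here is a deliberately toy cipher (a one-time pad: XOR against a random key of equal length). This is an illustration of the encrypt/decrypt principle only; for real data at rest or in transit, use your cloud provider’s key management service and a vetted cipher such as AES-GCM, never a hand-rolled scheme:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR every byte with a key byte. Because XOR is its
    own inverse, the same function both encrypts and decrypts."""
    assert len(key) == len(data), "one-time pad key must match data length"
    return bytes(b ^ k for b, k in zip(data, key))

secret = b"card ending 4242"
key = os.urandom(len(secret))        # key stays separate from the data store
ciphertext = xor_cipher(secret, key) # what an attacker sees on a breached disk
recovered = xor_cipher(ciphertext, key)
```

The takeaway mirrors the vault analogy above: breaching the filesystem yields only `ciphertext`, which is useless without the separately stored key.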

  2. Use Firewalls and Intrusion Prevention Systems

Firewalls are usually provided by the cloud service provider and are powerful when used correctly. You can configure them to whitelist company networks and block external traffic, ensuring only authorized personnel can access data on your network. Intrusion prevention systems (IPS) work alongside a firewall to detect and report malicious activity, alerting an administrator of any anomalies in network traffic. They can automatically blacklist IP addresses involved in brute-force attacks or drop requests containing known malicious payloads or signatures. An IPS can also be configured to redirect malicious traffic to a honeypot containing decoy data, distracting attackers and allowing administrators to study their methods in a sandboxed dummy environment.
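The brute-force detection mentioned above is, at its simplest, a counter over failed logins. A minimal sketch of that signal (the event format and threshold are hypothetical):

```python
from collections import Counter

def find_brute_forcers(failed_logins: list, threshold: int = 5) -> set:
    """Given (source_ip, username) pairs for failed logins in some time
    window, return the IPs that exceed the threshold - the signal an IPS
    would use to auto-blacklist an address."""
    counts = Counter(ip for ip, _user in failed_logins)
    return {ip for ip, n in counts.items() if n >= threshold}
```

A real IPS layers more on top (time windows, distributed attempts across many IPs, known-bad signatures), but the core of the brute-force rule is this counting step.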

  3. Implement Multi-factor Authentication

What if an employee falls for a classic phishing email? Maybe another gets their computer infected with a keylogger. Not everyone in a company practices good password hygiene – what if their credentials are leaked in a database breach, or someone uses an easy-to-guess password? The bottom line is, we can’t always trust a username and password combination to verify someone’s identity. Multi-factor authentication (MFA) adds an additional layer of account protection by requiring an employee to provide a one-time passcode (OTP), sent via email, SMS, or an authenticator app, on every login attempt in addition to their credentials. MFA impedes account hijacking attempts, enhancing the overall security of your organization.
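The codes those authenticator apps show are not magic: they are HOTP (RFC 4226) driven by a time-based counter (TOTP, RFC 6238). The whole algorithm fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226). Authenticator apps use
    TOTP (RFC 6238), which is HOTP with counter = unix_time // 30, so
    the code rolls over every 30 seconds."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret plus the current time, the server can verify it without any round trip, which is also why authenticator apps are preferred over SMS (no interceptable message in transit).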

  4. Backing up Your Data Often

Most cloud service providers provide automatic data backups. Make sure to take full advantage of this and create backups often – at least once a day. It’s also a good idea to store these backups in a separate, secure location. In general, you should look to back up any data that isn’t replaceable if lost, such as financial data, emails, user databases, registry files, and source code. In the event of a ransomware attack, you won’t need to pay hundreds of thousands of dollars to cybercriminals with no guarantee of actual restoration.

  5. Log everything!

Collecting quality logs is a fundamental aspect of cloud security, offering enhanced visibility into activities across the entire organization. Logs are especially useful in identifying malicious activity, both proactively and retrospectively. Regular logging establishes a baseline for what activities are normal; this data can then be leveraged to identify abnormal spikes in requests to an endpoint or in failed login attempts, letting the IT department identify the most common threat vectors against their service and mitigate breach attempts before they succeed. In the event of a security breach, logs enable forensic analysis to reveal the exploited vulnerability, what resources were accessed, and information about the attackers. Logs are also useful for monitoring internal employee actions, helping track and stop rogue employees or shadow IT. Insufficient logging and monitoring can result in failure to respond to an incident or breach in a timely manner.
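Those baseline-and-anomaly analyses are far easier when logs are structured rather than free-form text. A minimal sketch of a structured (JSON) security log line, with hypothetical field names:

```python
import json
import logging
from datetime import datetime, timezone

def security_event(action: str, user: str, source_ip: str, success: bool) -> str:
    """Emit one JSON-structured security log line. Structured fields let
    log tooling aggregate, e.g., failed logins per source IP, instead of
    regex-parsing prose messages."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "user": user,
        "source_ip": source_ip,
        "success": success,
    }
    line = json.dumps(record)
    logging.getLogger("security").info(line)
    return line
```

Shipping lines like these to a central store (most cloud providers offer a managed logging service for exactly this) keeps them tamper-evident even if the emitting host is compromised.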