PVS-Studio is a static analyzer that finds many problems hidden in source code, including errors related to application security. For example, the analyzer has recently learned to detect confidential data, such as passwords, left in the code. The OWASP Top Ten list includes this potential vulnerability, and it is much more dangerous than it may seem at first glance. What makes it so dangerous? How can a static analyzer save us from it? That's what you'll learn (and more) in this article!

We are continuously developing PVS-Studio as a SAST solution. We plan to improve the analyzer's ability to find even more security-related errors in C, C++, C#, and Java code. You can read about these plans in detail (and find more information) in PVS-Studio Roadmap 2021.

Sensitive data stored in your code

One way we expand SAST support is by adding new diagnostic rules that implement code compliance checks for various standards. A check for sensitive data in source code is among the latest additions to the C# analyzer. Storing such data in code violates requirement 2.10.4 of the OWASP Application Security Verification Standard (ASVS):

Verify passwords, integrations with databases and third-party systems, seeds and internal secrets, and API keys are managed securely and not included in the source code or stored within source code repositories. Such storage SHOULD resist offline attacks. The use of a secure software key store (L1), hardware TPM, or an HSM (L3) is recommended for password storage.

The OWASP Top Ten list includes risks related to insecure storage of sensitive data in code. The Common Weakness Enumeration (CWE) also contains two entries related to this issue: CWE-798 and CWE-259. Even so, one might wonder: why is it dangerous?

For open-source projects, the answer is obvious: anyone can view the code, so a password or other secret stored in it is there for the taking. An attacker only has to browse the repository.

The situation is slightly better if the application is available only in a compiled form. This can even create the illusion of security. After all, the source code is seemingly unavailable, which means that the data in it is also unavailable. Alas, that's not necessarily the case.

In practice, it's not uncommon for a system to contain hardcoded data that grants various rights. As a rule, users can't even change this data. Attackers may obtain it in different ways. In some cases, logins, passwords, and the like are visible right in the system's interface. In others, the attacker has to examine various files, decompile the code, resort to brute force, and so on. Either way, malicious hackers are good at finding ways to uncover hardcoded secrets.
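The decompilation point is easy to demonstrate even without a decompiler: string literals usually survive compilation untouched. Here is a minimal Python sketch of the idea behind the classic Unix `strings` utility, run over a handful of made-up bytes that stand in for a compiled binary (the byte contents and the embedded secrets are invented for illustration):

```python
import re

# A toy stand-in for a compiled binary: raw machine-code-like bytes with an
# embedded hardcoded password, just as a real executable would contain it.
binary = b"\x7fELF\x02\x01\x90\x90astridservice\x00\x90\xc3admin\x00\xff"

# Extract every run of 4 or more printable ASCII characters -- the same
# idea as the Unix `strings` utility.
found = [s.decode() for s in re.findall(rb"[ -~]{4,}", binary)]
print(found)  # -> ['astridservice', 'admin']
```

No special skills are required: the hardcoded credentials fall out of the compiled artifact with a one-line regular expression.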

Quite often, the following problem comes up: an attacker who has obtained logins and/or passwords stored in a system's sources can use them to connect to other systems of the same type. For example, they can install the system locally, extract users' logins and passwords from this local copy, and then connect to other installations using the same credentials.

There is another danger: every programmer with access to the source code also has access to the secrets stored in it. A user who has installed a system for their own needs won't be happy to learn that the software company can take full control of that system at any time and obtain the user's own secret data. The vulnerabilities recorded in the Common Vulnerabilities and Exposures (CVE) list show that such errors are discovered sooner or later, and when they are, they are put on public display.

As mentioned earlier, vulnerabilities related to hardcoded confidential data are pretty common, and there are many examples among CVEs. CVE-2012-5862 is one of them. The record describes a system whose login.php file contains the following code:

$password = mysql_escape_string($_POST['password']);

if (crypt($password,salt)=='satIZufhIrUfk'){
  $sql_pthr_ = "SELECT user,password FROM account WHERE livello = 0";
  ....
}

if ($password=='astridservice' and $stilecustumization=='astrid'){ // <=
  ....
}

if (crypt($password,salt)=='saF8bay.tvfOk'){
  $sql_insert="INSERT INTO account(user,password,livello,nome) VALUES  
               ('sinapsi','sinapsi','0','Amministratore Sinapsi')";
  ....
}

In this code, the variable containing the user-supplied password is compared directly with string literals. Obviously, an attacker will have no trouble exploiting this: the hardcoded values let the intruder perform operations unavailable to an ordinary user.

The PVS-Studio C# analyzer detects sensitive data stored in code with the V5601 diagnostic rule. Take a look at a C# code sample that resembles the example above:

string password = request.GetPostValue("password");
....
if (password == "astridservice" && stilecustomization == "astrid") 
....

After reviewing this code, PVS-Studio will emit the following warning:

V5601 Suspicious string literal could be a password: 'astridservice'. Storing credentials inside source code can lead to security issues.

Thus, the static analyzer finds such an error the moment it appears in the code. All you have to do is fix it, and the security level of your project increases.
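One common way to resolve such a warning is to keep no secret in the source at all: store only a salted hash outside the code (an environment variable, a secret store, etc.) and compare it in constant time. Below is a minimal Python sketch of this pattern. The variable names `APP_PWD_SALT`/`APP_PWD_HASH` and the demo values are invented for illustration, and a plain salted SHA-256 is used only to keep the example short; a real system should use a dedicated password-hashing function such as PBKDF2, bcrypt, or Argon2:

```python
import hashlib
import hmac
import os

def password_is_valid(supplied: str) -> bool:
    """Check a user-supplied password without hardcoding the secret.

    Only a salted hash is kept, and it comes from the environment
    (or a secret store) rather than from the source code.
    """
    salt = os.environ["APP_PWD_SALT"]           # illustrative names
    expected_hash = os.environ["APP_PWD_HASH"]  # hex digest of salt+password
    actual_hash = hashlib.sha256((salt + supplied).encode()).hexdigest()
    # hmac.compare_digest avoids leaking information through timing.
    return hmac.compare_digest(actual_hash, expected_hash)

# Demo: set up the environment as a deployment script or secret
# manager normally would (values are made up for this example).
os.environ["APP_PWD_SALT"] = "demo-salt"
os.environ["APP_PWD_HASH"] = hashlib.sha256(b"demo-saltcorrect horse").hexdigest()

print(password_is_valid("correct horse"))  # -> True
print(password_is_valid("astridservice"))  # -> False
```

With this approach the source code contains no literal that V5601 could flag, and leaking the repository no longer leaks the credentials themselves.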

Note. It is worth mentioning that V5601 belongs to the OWASP group of diagnostics. This group will appear in the PVS-Studio 7.12 release. By default, OWASP rules are disabled, but you can easily change this, for example, through the plugins for Visual Studio or Rider, or by editing the settings file directly.

This example is just one of many. Hardcoded data can lead to all sorts of problems. During my research, I found many other CVE records related to confidential data stored insecurely in code. Here are links to some of them:

  • CVE-2004-1920 – router with a "super" username and password;

  • CVE-2004-2556 – access point with a "super" username (again) and not a super password "5777364";

  • CVE-2004-2557 – the result of the CVE-2004-2556 "fix" (at least the login is no longer "super");

  • CVE-2012-1288 – hardcoded credentials for an administrative account;

  • CVE-2012-2949 – a hardcoded password in an Android app;

  • and so on.

One more reason to run the analysis regularly

The conventional wisdom is that a static analyzer only needs to be run once every few months, just before a release (or even once a year). That's a rather strange choice. Fixing errors that have accumulated over months is much harder than fixing code you wrote just before the commit. What's more, thanks to incremental analysis, such checks run quickly.

In many cases, analyzing commits and pull requests is a convenient option. It increases the reliability of the application under development even further, since code containing errors never reaches the main branch of the repository. It also acts as a safety net for a developer who forgot to run the analysis. To find out more about configuring pull request checks, see the documentation (section "Deploying the analyzer in cloud Continuous Integration services").

The new ability to search for sensitive data in code is one more argument for regular analysis, both on programmers' computers and within CI. Even if a programmer puts a password in the source code, the analyzer warns them about it right away. If necessary, the developer can read the V5601 diagnostic documentation to see where the danger lies.

If you rarely run the analysis, hardcoded data stays in the source code for a long time. For an open-source project this is very bad: by the time the analyzer finds the problem, the data is no longer confidential. But closed projects are not safe either. What if a user gets, say, a beta version of the app released between official releases? If you don't check the sources regularly, the code of such a version won't be analyzed, and all the data hidden in it effectively becomes public.

Conclusion

PVS-Studio is constantly evolving. We add new diagnostic rules, refine existing mechanisms, and develop new capabilities. It's worth noting that an ongoing dialogue with users plays a large part in making the analyzer better. The V5601 diagnostic rule is just one of the elements that help the analyzer improve your code's security.

How about trying PVS-Studio on your projects? You can get it for free: just follow the link and fill out a simple form. Well, that's it. Thank you for your attention. See you soon :).
