We learned from the previous article that SOCs/Incident Response teams should be looking for threats that pose a high risk to normal business activities.
We know the who, but how can we define what needs to be protected?
Assume your company has over a thousand business applications, hosted in multiple data centres as well as in the cloud. There are Windows and Linux hosts, many of them unpatched, of course. On top of that, nobody knows who owns them.
The following article cuts through this complexity and explains a simple approach.
The Low-Hanging Fruit Might Be Rotten
The Security Operations Centre (SOC) might be tempted to ignore the company’s applications and go for the low-hanging fruit instead.
A typical case is monitoring AV and proxy logs for specific events. If someone clicks on a suspicious link – hosting malware or an exploit kit – the event raises an alert at the SOC. An army of SOC analysts rolls into action, the PC gets wiped, further phishing emails are blocked, and everyone is happy. Disaster averted, right?
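To make this concrete, the proxy-log alerting described above can be sketched in a few lines of Python. The log format, field positions and domain names below are illustrative assumptions, not the output of any particular proxy product:

```python
# Hypothetical sketch: flag proxy log entries whose destination appears
# on a known-bad domain list. Log format and domains are assumptions.
KNOWN_BAD = {"evil-exploit-kit.example", "malware-dropper.example"}

def flag_suspicious(proxy_lines):
    """Return (user, domain) pairs that hit a known-bad domain."""
    hits = []
    for line in proxy_lines:
        # Assumed format: "<timestamp> <user> <domain> <status>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in KNOWN_BAD:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2016-03-01T10:00:01 alice news.example 200",
    "2016-03-01T10:00:05 bob evil-exploit-kit.example 200",
]
print(flag_suspicious(logs))  # [('bob', 'evil-exploit-kit.example')]
```

This is exactly the kind of rule that is easy to write and easy to demo – which is why it is tempting to stop here.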
Wrong, because your SOC did not protect your company’s business interests. The low-hanging fruit approach does not prevent actual breaches, such as:
- A script kiddie dumps customer data from a public-facing web app via SQL injection (as in the TalkTalk and VTech hacks)
- Malware tailored for your bank’s applications cancels ATM withdrawals after the money is dispensed
- Malware manipulates currency exchange rates by placing fake orders en masse
- Someone uploads customer data to Pastebin, GitHub Gist or BitTorrent (victims include Uber, British Gas and Electronic Arts), and nobody knows how the data got out
The Right Approach to Event Log Monitoring
There are two competing approaches to enable the SOC to detect malicious activity. We can either collect logs from:
- Business application stacks (application, database, web server, OS)
- Infrastructure elements (proxy, Netflow, SMTP, AV, IDS/IPS, endpoints)
But which one? If we only focus on business application stacks, the SOC cannot detect things like lateral movement, Mimikatz usage, ransomware and phishing campaigns. Conversely, if the SOC only monitors logs from infrastructure elements, we miss tampering with the application logic (like reversing ATM withdrawals).
The answer is that both approaches are needed: application logs and infrastructure logs are both required for an all-around defence.
Event Logs from Business Applications
The primary goal of application log monitoring is to protect the holy triad of the application: confidentiality, integrity and availability.
However, each business application is different. Therefore, application owners should provide a list of abnormal events. For instance, it is certainly not expected that someone transfers $1 trillion from a bank account by exploiting a validation bug.
Based on the list of adverse events, the SOC can now write their playbooks to handle these events. The application owner is also expected to contribute to the playbook, as remediation steps may involve the application itself. For instance:
- A payment transaction needs to be rolled back
- A user account has to be disabled or its password changed
- The application has to be taken offline
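An owner-defined adverse event can be as simple as a sanity rule over the application's own transaction log. The sketch below is a hypothetical example – the threshold, record fields and the $1 trillion case are illustrative assumptions, not a real banking API:

```python
# Hypothetical sketch: an application-owner-defined rule that alerts on
# transfers above a sanity threshold. Threshold and fields are assumed.
SANITY_LIMIT = 1_000_000_000  # no single transfer should exceed $1bn

def adverse_events(transactions):
    """Return transactions the application owner has declared abnormal."""
    return [t for t in transactions if t["amount"] > SANITY_LIMIT]

txns = [
    {"id": 1, "amount": 2_500},
    {"id": 2, "amount": 1_000_000_000_000},  # the $1 trillion case
]
print(adverse_events(txns))  # [{'id': 2, 'amount': 1000000000000}]
```

The point is not the code but the ownership: only the application owner knows that $1bn is an absurd amount for this system, so only they can supply the rule – and the matching playbook step (roll the transaction back).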
As an added benefit, application logs also help SOCs investigate data breaches. For example, they may reveal that a former employee keeps accessing his supervisor’s inbox for insider tips.
Event Logs from the Infrastructure
On the other hand, infrastructure logs are invaluable in detecting attacks against the company. For example:
- Phishing campaigns distributing ransomware (e.g. by alerting on IoCs)
- Lateral movement across the infrastructure (e.g. revealing Mimikatz with honey hashes)
- Detecting C2 communication
- Detecting exploitation attempts (Flash, SQLi)
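As one example of an infrastructure-level detection, C2 implants often "beacon" home at near-constant intervals, which stands out in proxy or Netflow timestamps. This is a simplified sketch under assumed thresholds, not a production detector:

```python
# Hypothetical sketch: spot beacon-like C2 traffic by checking whether a
# host contacts the same destination at near-constant intervals.
# The tolerance value is an illustrative assumption.
def looks_like_beacon(timestamps, tolerance=2.0):
    """True if consecutive connection times are (almost) evenly spaced."""
    if len(timestamps) < 4:
        return False  # too few samples to call it periodic
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = sum(gaps) / len(gaps)
    return all(abs(g - avg) <= tolerance for g in gaps)

# Connections every ~60 seconds – typical of an unsophisticated implant
print(looks_like_beacon([0, 60, 119, 181, 240]))  # True
print(looks_like_beacon([0, 5, 200, 230, 900]))   # False
```

Real implants add jitter precisely to defeat this kind of check, which is why such rules are one signal among many rather than a verdict on their own.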
Infrastructure logs can also be used for things like:
- Data breach investigations (was the data exfiltration successful?)
- Identifying other affected devices across the infrastructure
- Pro-actively searching for indicators of compromise (a.k.a. “hunting”)
Use cases are mainly developed by the SOC, as the team is supposed to be up-to-date on the latest and greatest threats out there.
The final stage is the development of playbooks for handling alerts. The SOC should also be able to identify the logs they require, such as:
- Proxy logs
- DNS query logs
- Netflow logs
- HIPS/HIDS logs
- SMTP logs
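In practice, once these sources are identified, they usually have to be normalised into a common event schema before SOC use cases can run across all of them. The formats and field names below are assumptions for illustration, not the native output of any specific proxy or DNS server:

```python
# Hypothetical sketch: normalise two assumed log formats (proxy and DNS)
# into one minimal schema so use cases can query both uniformly.
def normalise(source, line):
    parts = line.split()
    if source == "proxy":
        # Assumed format: "<ts> <user> <domain> <status>"
        return {"ts": parts[0], "actor": parts[1], "target": parts[2]}
    if source == "dns":
        # Assumed format: "<ts> <client-ip> query <name>"
        return {"ts": parts[0], "actor": parts[1], "target": parts[3]}
    raise ValueError("unknown source: %s" % source)

print(normalise("proxy", "2016-03-01T10:00:01 alice news.example 200"))
print(normalise("dns", "2016-03-01T10:00:02 10.0.0.5 query c2.example"))
```

With a shared `actor`/`target` schema, a single IoC match or hunting query can cover proxy, DNS and any other source the SOC onboards later.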
As with application logs, infrastructure logs also enable SOC analysts to investigate and manage intrusions and data breaches. Therefore, infrastructure logs should be retained for a longer period.
Successful SOCs/Incident Response teams should collect, crunch and monitor event logs from two different event log streams: application and infrastructure.
Logs from application stacks are useful for detecting and mitigating attacks against the applications’ business logic. Because app owners have the best understanding of their applications, they should define the suspicious events to be managed.
Event logs from the infrastructure are indispensable when it comes to attacks against the company as a whole. Infrastructure logs allow the SOC to detect malicious activities, for instance, phishing campaigns, C2 and backdoor traffic, and lateral movement. Infrastructure logs can also support pro-active “hunting” activities.
Both log sources help with data breach investigations. Therefore, it is sensible to retain the logs for an extended period, as data breaches are usually detected months after the initial intrusion.
Photo courtesy of The Brunei Times