Thursday, July 8, 2021

Creating a regular expression for QRadar

To deal with these, we first need to properly extract the second double-quoted field (the referrer). Note that Apache log files use backslashes to escape embedded quotes and other special characters, which means a naive regular expression such as "[^"]*" isn't good enough.
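
To see why, consider a made-up log line whose request field contains an escaped quote (the IP, URL, and query string are purely illustrative):

line='1.2.3.4 - - [08/Jul/2021:10:00:00 +0000] "GET /say-\"hi\" HTTP/1.1" 200 512 "https://www.google.com/search?q=example" "Mozilla/5.0"'
printf '%s\n' "$line" | grep -oP '"[^"]*"'

which prints:

"GET /say-\"
" HTTP/1.1"
"https://www.google.com/search?q=example"
"Mozilla/5.0"

The escaped quote splits the request field in half and pushes the referrer down to the third match instead of the second.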

Using grep to extract the referrer field (second double-quoted field):

grep -oP '^[^"]+"[^"\\]*(?:\\.[^"\\]*)*"[^"]+"\K[^"\\]*(?:\\.[^"\\]*)*(?=")' logfile.txt

Looks crazy! Let's break it down:

  • The -o option to grep means we get only the matching part of each line, not the rest of it
  • The -P option tells grep to use Perl-compatible regular expressions
  • The overall structure of the regular expression, ...\K...(?=...), means the whole pattern has to match, but only the part between the \K and the (?=...) is output

Breaking the regular expression down further:

  1. ^[^"]+ – Get everything between the start of the line and the first "
  2. "[^"\\]*(?:\\.[^"\\]*)*" – Get the entire first double-quoted string. 
  3. [^"]+ – Get everything between the two strings
  4. "\K[^"\\]*(?:\\.[^"\\]*)*(?=") – The same as the first quoted string, but with \K after the opening " so that matching (output) starts after it, and (?=") so that it stops just before the closing ".
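
Putting it together on the same made-up log line from the first example, the full command pulls out just the referrer:

printf '%s\n' "$line" | grep -oP '^[^"]+"[^"\\]*(?:\\.[^"\\]*)*"[^"]+"\K[^"\\]*(?:\\.[^"\\]*)*(?=")'

which prints https://www.google.com/search?q=example – the escaped quotes in the request field no longer throw the match off.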


After this point the data will be much easier to process, because you no longer have to worry about the quotes or about extracting the field cleanly from the log file.

For example, you could pipe the output into another grep:

grep -oP ... logfile.txt | grep -oPi '^https?://www\.google\.com/search\?\K.*'

Here the -i option to the second grep makes the match case-insensitive.
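
For instance, taking the referrer extracted from the made-up line above and running it through the second grep leaves only the query string:

printf '%s\n' 'https://www.google.com/search?q=example' | grep -oPi '^https?://www\.google\.com/search\?\K.*'

which prints q=example.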

Alternatively, you could add the check for the start of the google.com referrer directly into the first regular expression and move the \K as appropriate, but I would recommend against this: it's better to run two regular expressions that each do one job well than to combine them into one whose job is unclear.

Note that if you want to collect referrers from other Google domains you will need to modify your regular expression a fair bit. Google owns a lot of search domains.

If you didn't mind potentially catching a few non-Google sites, you could do:

... | grep -oPi '^https?://(www\.)?google\.[a-z]{2,3}(\.[a-z]{2})?/search\?\K.*'
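
As a quick illustration (both URLs are made up), this looser pattern happily extracts the query string from a genuine Google country domain but will also accept a similarly shaped host that Google does not own:

printf '%s\n' 'https://www.google.co.uk/search?q=test' 'https://www.google.zz/search?q=test' | grep -oPi '^https?://(www\.)?google\.[a-z]{2,3}(\.[a-z]{2})?/search\?\K.*'

Both lines print q=test, even though google.zz is not a Google domain.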

Otherwise you would need to attempt to match only Google-owned search domains, which is a constantly moving target:

... | grep -oPi '^https?://(www\.)?google\.(a[cdelmstz]|b[aefgijsty]|cat|c[acdfghilmnvz]|co\.(ao|bw|c[kr]|i[dln]|jp|k[er]|ls|m[az]|nz|t[hz]|u[gkz]|v[ei]|z[amw])|com(\.(a[fgiru]|b[dhnorz]|c[ouy]|do|e[cgt]|fj|g[hit]|hk|jm|k[hw]|l[bcy]|m[mtxy]|n[afgip]|om|p[aeghkry]|qa|s[abglv]|t[jrw]|u[ay]|v[cn]))?|d[ejkmz]|e[es]|f[imr]|g[aefglmpry]|h[nrtu]|i[emoqst]|j[eo]|k[giz]|l[aiktuv]|m[degklnsuvw]|n[eloru]|p[lnst]|r[osuw]|s[cehikmnort]|t[dgklmnot]|us|v[gu]|ws)/search\?\K.*'

Also note if you want to include Google's image search and other search subdomains, you will need to change the (www\.)? in one of the above grep commands to something like ((www|images|other|sub|domains)\.)?.

Tuesday, April 10, 2018

Security Onion Elastic Stack General Availability Release and Security Onion 14.04.5.11 ISO Image!

Elastic Stack integration has now reached General Availability (GA)!  This includes a new 14.04.5.11 ISO image that contains these GA components and all the latest Ubuntu and Security Onion updates as of March 28, 2018!

GA Highlights

Monday, October 23, 2017

Pen Testing Checklist for Cloud

Start with the contract. Your cloud services are provided under a contract between you and your CSP. This forms the basis of the relationship and defines which activities each party is responsible for performing. Not all CSPs are the same, nor are all contracts identical. Some will have various tiers of service; others may provide a base offering with additional “add-on” options. Whatever your situation, it is vital to have a clear understanding of roles and responsibilities (R&R), policies, service commitments, and restrictions.
  1. Check the Service Level Agreement (SLA) to ensure the appropriate Pen Test policy has been identified, and R&R clearly defined. In many cases, elements of Pen Testing are spread across multiple players such as the CSP and the client, so it is necessary to clearly document who does what, and when it is to be done.
  2. Governance and compliance requirements need to be understood. This includes determining which party will be responsible for defining, configuring, and validating the security settings required to meet the regulatory controls applicable to your business. It also includes providing appropriate evidence for audits and inspections.
  3. Security and Vulnerability Patching and general maintenance responsibilities and timeframes need to be documented. You as the client may have responsibility for maintaining your virtual images and resources, but the CSP will likely be accountable for the underlying physical hardware systems. Both need to be actively managed, along with all network and SAN equipment.
  4. Computer access and Internet usage policies need to be clearly defined and properly implemented to ensure appropriate traffic is permitted while inappropriate traffic is denied at the perimeter.
  5. Ensure all unused ports are disabled and unused protocols are either not installed or disabled and locked down to prevent unauthorized activation.
  6. Data encryption, both in transit and at rest, is becoming more common, but never assume it is in place. Ensure that encryption is either set as the default or that appropriate steps are implemented to activate it.
  7. Verify that your requirements for Two Factor Authentication and One Time Passwords are implemented and actively securing network access. Check if your CSP permits any bypass scenarios.
  8. SSL is only as good as the Certificate Authority (CA) that issued the certificates. Ensure SSL is active, and that a reputable CA stands behind the certificates.
  9. Hold your CSP accountable and validate that they are using appropriate security controls for physical and logical access to the data center and to the infrastructure hardware used to provide your services.
  10. Know your CSP’s policy and procedures relative to data disclosure to third parties, both for unauthorized access and providing data when requested or subpoenaed by law enforcement.
More than just running a scan, pen testing requires an understanding of your environment and of the roles, responsibilities, and even liabilities shared between you and your CSP.

PCI Pen Testing Requirements

Scanning Isn’t Testing

Requirement 11 is the juggernaut of PCI v3.2. It’s packed full of objectives involving network scanning (11.2) and penetration testing (11.3). Just because scanning and testing are lumped into the same PCI requirement doesn’t mean they accomplish the same goals.

The PCI Security Standards Council is helping to clear the air with a couple of simple descriptions.
Vulnerability Scanning: Identifies, ranks, and reports on vulnerabilities that, if exploited, may result in an intentional or unintentional compromise of a system. Translation? It’s an automated process that takes a short amount of time, performs no verification, and has a very high chance of false positives.
Penetration Testing: Identifies ways to exploit vulnerabilities to circumvent or defeat the security features of system components. Translation? It’s a manual or automated testing process that uses vulnerability scanning results as a baseline, lasting days or weeks depending on the scope.

Testing Ain’t Easy

Whether you use a tool or a service, complying with PCI Requirement 11.3 is a formidable task taking a combination of resources, time and a little bit of planning. In its simplest form, Requirement 11.3 mandates internal and external penetration testing at least annually or after “any significant infrastructure or application upgrade or modification”. Considering the volume of iterations of common operating systems and applications, I’d say we can throw out the idea of only testing annually. So, how do you test regularly and stay agile?
  1. The right person or service: Take a look to see if any of your team members have pen testing experience. The great thing is that some penetration testing products, like Core Impact, have the option of using wizards to run various tests. This helps automate some of the more laborious tasks, while accomplishing the same goals in less time. As a supplement to this, try bringing on a more seasoned penetration tester or dedicated third-party security service.
  2. The right tool: Depending on your needs and overall requirements scope, the tool you require may differ. Whatever it is, you need to be able to test regularly as things change in your environment. So, if you have one person on your team who only uses an open-source command-line based pen testing tool, with no other options, this could leave you in a bit of a pickle if he or she leaves. Be sure to have a backup plan.
  3. The right methodology: Penetration testing doesn’t just happen and disappear. According to the new PCI Security Standards Council penetration testing guidelines, it requires a phased approach involving pre-engagement, engagement, and post-engagement steps. These steps cover important topics such as timing, frequency, reporting, systems to target, success criteria, and so on. Preparing properly will help you stay on target, meet the PCI requirements, and not become overwhelmed. Well, not too overwhelmed at least.

Friday, February 17, 2017

Security and Sandbox

It’s time to go beyond using sandboxing as a standalone capability in order to get the most out of it. You need a more robust malware analysis tool that fits seamlessly into your infrastructure and can continuously detect even the most advanced threats, including those that are environmentally aware and designed to evade detection.
There are three typical ways that organizations purchase and deploy sandbox technology.
  1. A stand-alone solution designed to feed itself samples for analysis without dependency on other security products. This has the most flexibility in deployment but adds significant hardware costs and complexity to management and analysis, especially for distributed enterprises.
  2. A distributed feeding sensor approach, such as firewalls, IPS, or UTMs with built-in sandboxing capabilities. These solutions are usually cost-effective and easy to deploy, but they are less effective at detecting a broad range of suspicious files, including web files. They can also introduce bandwidth limitations that hamper network performance, and they raise privacy concerns when a cloud-based solution is the only option.
  3. Built into secure content gateways, such as web or email gateways. This approach is also cost-effective, but it covers only the web and email channels and likewise introduces performance limitations and privacy concerns.


ArcSight Threat Tracking Method

This Activate Method is used to track attacker and target system state and their progression through the attack life cycle. It consists of a set of rules that update attacker and target threat scores, as well as their progression and the frequency of indicators and warnings within the attack life cycle. All of this information is tracked in a set of lists with varying TTLs for the entries.
For this to work efficiently, indicators and warnings have predetermined categorization requirements. For example, an IDS reporting shellcode over the wire is tagged with Category Custom Format Field = "/Attack Life Cycle/Delivery". The rules look for these conditions and populate attacker and target information in the appropriate lists, along with the threat score tracking information.
[Figure: Threat Tracking Event Flow]

Attack Life Cycle

The Attack Life Cycle lists are very straightforward. There is one list for each phase of the attack life cycle, and the rules for getting attackers and targets into the lists follow two simple laws:
  • Attackers and targets can live in multiple lists at the same time
  • Only Activate rules will populate the lists
NOTE:
  • The Category Custom Format Field will be used to move data into the appropriate list
  • All content is stored under:
    /All /ArcSight Activate/Use Cases/Threat Tracking/Attack Life Cycle/System Perspective/
The lists and their descriptions:
  • Phase 1 Reconnaissance – Tracks attackers and targets that are conducting research and identification of targets. If the attacker's target can be derived from the event/analysis, then the target will also be tracked. Typically, these indicators and warnings are found by monitoring NIDS, HIPS, firewall ACLs, and web analytics.
  • Phase 2 Weaponization – Weaponization is not normally 'viewable', as the attacker typically creates weaponized packages on systems that we do not control. This category will be used for file analysis tools that detect known IOCs (Mandiant, STIX, Tripwire).
  • Phase 3 Delivery – Indicators and warnings that intercept the transmission of executable code to a target. NIDS, HIPS, proxies, and in-line AV are sources capable of detecting these events.
  • Phase 4 Exploitation – Execution of the attacker's code. This list will track when code is executed, either by the user or by exploiting a particular vulnerability. Indicators and warnings are detected at the OS/HIDS level (SRP/ASLR/DEP).
  • Phase 5 Installation – Installation of remote access code. This can be detected at the OS/HIDS/AV level.
  • Phase 6 C2 – Tracks attackers and hosts that are displaying beaconing characteristics. These are often detected by NIDS, firewall ACLs, honeypots, and DNS, and detection can be enriched with external intel data.
  • Phase 7 Objectives – Post-"pwnage" activities (data exfiltration, corruption, destruction, pivots).