I’ve been asked on multiple occasions if we could write a custom detection as a “compensating control” to mitigate the risk of a vulnerability being exploited, in order to buy system admins more time or better yet to altogether negate the need to take a critical business service offline for patching. This article will explain why the answer has never been “yes”, due to the inherent limitations of Threat Detection and Detection Engineering.
We reported last week on the widespread exploitation of a critical vulnerability in the PaperCut print management software, perpetrated by CL0P and LockBit ransomware affiliates and helped along by a public PoC exploit released by researchers at Horizon3.
On the flip side of that were defenders, creating and sharing detection rules to assist those who couldn’t/hadn’t yet patched the vulnerability, or who wanted to monitor for possible exploitation as a precaution.
While a variety of detections were fielded – utilising Sysmon logs, device-generated logs, and even network telemetry – the team at Vulncheck last week proved why detections can’t truly compensate for a lack of patching, by bypassing all of them in a revised PoC.
Now you see me, now you don’t
The Sysmon detections largely hinged on alerting where the PaperCut application process (pc-app.exe) spawned an unexpected child process like cmd.exe or powershell.exe.
This can be – and indeed already is – circumvented by spawning a reverse shell in a way that requires an intermediary process, for example when dropping a Java instance of the Meterpreter shell:
Figure 1: modifying the PoC to drop the Java Meterpreter shell will circumvent this detection logic
While Horizon3 researchers were right to point to native log entries indicative of the exploit’s targeting of the software’s print scripting interfaces – this, again, was bypassed by Vulncheck, who instead found they could abuse the User/Group Sync “custom programs” to achieve the same effect while avoiding triggering those alerts altogether.
The Suricata IDS rule proposed by Proofpoint also proved brittle – because the rule searched for the specific string “/app?service=page/SetupCompleted” as an indication of attempted exploitation, it can also be bypassed by inserting junk values that wouldn’t impact the request, such as page//SetupCompleted or random=1&page/SetupCompleted.
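To see why that kind of literal string match is so brittle, here’s a minimal Python sketch of the matching logic – the signature string is taken from the rule, but the matching function is an illustrative stand-in, not actual Suricata internals:

```python
# Brittle detection logic: flag any request whose raw URI contains the
# exact exploitation string. Purely illustrative stand-in for the IDS rule.
SIGNATURE = "/app?service=page/SetupCompleted"

def naive_match(raw_uri: str) -> bool:
    return SIGNATURE in raw_uri

# All three probes reach the same endpoint; only the first matches the
# literal signature.
probes = [
    "/app?service=page/SetupCompleted",
    "/app?service=page//SetupCompleted",           # doubled slash
    "/app?random=1&service=page/SetupCompleted",   # junk parameter first
]

print([naive_match(p) for p in probes])  # [True, False, False]
```

A rule that canonicalised the URI before matching on a stable token like `SetupCompleted` would catch these variants – at the cost of the false-positive risk discussed next.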
It’s an Art, not a Science
Naturally, the original rules could be modified to alert on a child process of java.exe or to search for a string consistent across variations such as “SetupCompleted” – but that would introduce a high risk of generating false positive alerts.
This brings me to the point of this post – Detection Engineering is a constant battle to maintain a (heavily subjective) balance between being specific enough to generate as few false positive alerts as possible, while also being broad enough to account for attacker permutations and alternate components that can be inserted into the exploit chain to evade our detections.
The PaperCut vulnerability is the perfect example of why detections are not a substitute for mitigation – the quality of your detection hinges entirely on how well you’ve accounted for all possible implementations or variations of a technique or procedure.
Should the authors of the first round of detections have been expected to know that User/Group Sync “custom programs” could also be targeted to exploit the vulnerability? Probably not, but the confidence in those detections – which many security teams would be relying on – would have still been moderate to high, in the absence of evidence that they had in fact missed another attack path for the vulnerability.
In other words – you can’t know what you don’t know, and that’s a problem when an organisation is reliant on detections for mitigation.
The saga continues…
While I’ve got you – if two ransomware groups and a publicly available exploit PoC weren’t enough to get you patching – Microsoft have also reported spotting Iranian actors getting in on the action, with Mint Sandstorm (formerly PHOSPHORUS) and Mango Sandstorm (formerly MERCURY) observed opportunistically exploiting the vulnerability across a range of industries and geographies.
This all kicked off on Wednesday with SentinelOne’s release of a post that looked at a campaign dubbed “Smooth Operator”, which essentially found that the 3CX Voice Over Internet Protocol (VOIP) desktop client – used by some 600,000 companies worldwide and over 12 million daily users – had been compromised with a malicious update.
Moreover, Huntress Labs found 242,519 internet-exposed 3CX phone management systems as of the 30th March, and a further 2,783 instances in their customer networks running the trojanized software.
What can I do?
Well for one – ensure you’ve removed any exceptions that might have been created for the application.
The compromised software has been assigned CVE-2023-29059, and impacts the following versions on Windows and MacOS:
Windows: versions 18.12.407 and 18.12.416 of the Electron Windows application shipped in Update 7, and
Versions 18.11.1213, 18.12.402, 18.12.407, and 18.12.416 of the Electron MacOS application.
Immediately uninstall any affected versions of the product, and perform hunting using IOCs noted in the articles in the Further Reading section at the bottom of this post.
Florian Roth and his team have also shared several Sigma and YARA rules to help identify compromised files that were leveraged in the attack.
How did this happen?
Customer reports of the trojanised application being quarantined by antivirus products first began surfacing the week prior, on the 22nd of March, though SentinelOne have reported observing activity as far back as March 8th.
Analysts at Volexity have found the domains and web infrastructure used in the attacks were registered as early as November 2022, and infrastructure used by the Windows variant were activated on December 7th, 2022.
Huntress Labs have taken it even further, having found network infrastructure being established as far back as February 2022:
Figure 1: The planning and execution was months in the making
Whose supply chain was hit?
ReversingLabs have done some great analysis of the potential origin of this attack, pointing to either a compromise of the 3CX development pipeline (a la SolarWinds) or a malicious upstream dependency, the kind we often see impacting package repositories like PyPI, Maven, or npm.
3CX were quick to point fingers at ffmpeg – the upstream code supplier for the trojanized ffmpeg.dll binary – but the FFmpeg developers weren’t having it, and told 3CX to double-check their homework:
There have been several incorrect reports that FFmpeg has been involved in the distribution of malware.
FFmpeg only provides source code and the source code has not been compromised. Any “ffmpeg.dll” that has been compromised is the responsibility of the vendor.
That said, Volexity researchers are right to point out that in order to have trojanized the software’s updates so effectively, the actors would have lingered in 3CX’s network for some time – long enough to “develop an understanding, access, and malicious code for the development-update process of the company”.
3CX have engaged Mandiant to conduct an investigation, which is ongoing as of the time of writing.
How does the attack work?
A trojanized update which was sent out to customers included a modified and malicious version of the legitimate ffmpeg.dll and d3dcompiler.dll binaries, which then retrieved an obfuscated and encrypted .ICO file from an attacker-controlled Github repository.
This subsequently dropped an info-stealing payload which Volexity have dubbed “ICONICSTEALER”. The 64-bit DLL was compiled on March 16, and is “designed to collect information about the system and browser using an embedded copy of the SQLite3 library.”
A malicious update was also issued for the MacOS version of the 3CX installer, and it appears to have actually pre-dated the Windows attacks, with the earliest vulnerable version – 18.11.1213 – being deployed in January this year.
You can find a more detailed analysis of the execution chain in Patrick Wardle’s blog, but for a quick overview, Thomas Roccia (a.k.a @fr0gger_) has you covered:
Signed =/= Trusted
Notably, ReversingLabs reported in their analysis of the Windows sample that they “identified signatures in the appended code pointing to SigFlip, a tool for modifying the authenticode-signed Portable Executable (PE) files without breaking the existing signature.”
This was elaborated on by Will Dormann, who pointed to its apparent use to abuse a 10-year-old flaw – CVE-2013-3900 – classed as a “WinVerifyTrust Signature Validation Vulnerability.”
The vulnerability would allow an attacker to append content to the authenticode signature section (WIN_CERTIFICATE structure) of a signed executable – without invalidating the signature.
While a fix was issued back in 2013, Microsoft made it opt-in, as it could break functionality of legitimate apps such as Google Chrome, which modifies the Authenticode signature as part of denoting if diagnostic logs are meant to be collected and sent.
The result? This was abused to append a malicious payload to a usually legitimate DLL signed by Microsoft named d3dcompiler_47.dll, with the signature left intact and incorrectly marking the file as being unaltered and verified by Microsoft.
Figure 3: The signature remains intact on the modified dll
This is significant as it allowed the attacker to bypass basic file-signing checks by both automated security controls and L1 security analysts. It would have also played a part in influencing system admins to dismiss EDR alerts as false positives, and to create security exceptions so the apparently untampered software could continue to run.
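Conceptually, the smuggled payload sits in the slack space between the end of the genuine PKCS#7 blob and the dwLength declared in the WIN_CERTIFICATE header. A rough Python sketch of how a triage script might carve that region – the field layout follows the Authenticode PE format, but the function names and carving approach here are mine, not SigFlip’s:

```python
import struct

def der_total_length(blob: bytes) -> int:
    """Total byte length (tag + length + value) of the DER element at blob[0]."""
    if blob[0] != 0x30:  # PKCS#7 SignedData is an ASN.1 SEQUENCE
        raise ValueError("not a DER SEQUENCE")
    first = blob[1]
    if first < 0x80:                      # short-form length
        return 2 + first
    n = first & 0x7F                      # long-form: next n bytes hold the length
    return 2 + n + int.from_bytes(blob[2:2 + n], "big")

def appended_bytes(win_cert: bytes) -> bytes:
    """Bytes smuggled inside a WIN_CERTIFICATE after the real PKCS#7 blob.

    WIN_CERTIFICATE header: DWORD dwLength, WORD wRevision,
    WORD wCertificateType, followed by the certificate data.
    """
    dw_length, _rev, _ctype = struct.unpack_from("<IHH", win_cert, 0)
    der_end = 8 + der_total_length(win_cert[8:])
    return win_cert[der_end:dw_length]
```

Because the Authenticode hash calculation excludes this region, the signature still verifies even when `appended_bytes` returns a non-empty payload.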
So, who did it?
CrowdStrike have attributed the attack to a group they track as Labyrinth Chollima, a DPRK-aligned actor with form conducting cyber espionage, cryptocurrency theft and destructive attacks. Their analysis found “the HTTPS beacon structure and encryption key match those observed by CrowdStrike in a March 7, 2023 campaign attributed with high confidence to DPRK-nexus threat actor LABYRINTH CHOLLIMA.”
Analysts from Sophos and Volexity have also corroborated this attribution to some degree, with Sophos noting the code was “a byte-to-byte match” with what has been seen in previous activity by the Lazarus Group – the catch-all threat group for DPRK-aligned attackers – and Volexity finding the specific shellcode sequence “appears to have been only used in the ICONIC loader and the APPLEJEUS malware, which is known to be linked to Lazarus.”
Given the wide range of objectives that Labyrinth Chollima – and Lazarus Group, for that matter – have sought to achieve over the years, it’s unclear what their intent was. The fact that the delivered payload was designed to pilfer browsing history from impacted victims indicates that this may have been the first step in a more prolonged, and likely targeted campaign.
How has this been handled?
In six words – very, very, very, unbelievably, cringe-tastically, poorly.
Unfortunately, despite receiving dozens of reports from users of multiple EDR products (SentinelOne, CrowdStrike, ESET, Palo Alto Networks, and SonicWall, to name a few) flagging the VOIP client as malicious, 3CX simply responded by telling customers to add exclusions to allow it to continue to run, and to follow-up with their EDR vendor to resolve the problem.
In an interview with CyberScoop, 3CX CEO Nick Galea noted that antivirus products flagged their software as malicious “quite frequently — so I have to be honest we didn’t take it that seriously […] we did upload it to a site called VirusTotal to check […] and none of the anti-virus engines flagged us of having a virus, so we just left it at that.”
For those wondering what’s wrong with this approach, this Tweet explains succinctly why that is not an adequate validation process for a potential supply chain attack:
So currently expecting a supply chain pwned full installer package that is signed using a cert that is probably whitelisted by not 1-2 vendors to be well detected from the second it was first seen on VT is very very naive, to say it nicely. (2/2)
If that’s not enough to have your head spinning, cop this – Galea only acknowledged the vulnerability on forums on the 31st of March – more than a week after users began reporting the issue – and claims “it was only reported to [them] yesterday night.”
Figure 5: My boy Jackie knows what’s up
True to form
A post by respected researcher Kevin Beaumont shows this may be symptomatic of the security culture at 3CX, as he highlighted that when he attempted last year to report a vulnerability that “3CX took little responsibility, didn’t fix it, and started arguing on Twitter”.
The vulnerability? That files – including the admin password – could be read in plaintext.
Non-transitive trusts (a.k.a external trusts) – as described by Microsoft – are designed to “deny trust relationships with other domains”, or in other words, only the two domains involved in the trust will be able to authenticate to each other.
Unfortunately, researchers from Semperis have discovered that non-transitive trusts can – contrary to their design intent – allow authentication across domains, as well as potential privilege escalation within the trusting domain.
Breaking the Trust
In the diagram below, a non-transitive trust exists between semperisaz.lab and grandchild1.child1.semperis.lab. This allows for a referral TGT – which is used to request Service Tickets for any service within domains with an established trust path – to be requested for grandchild1.child1.semperis.lab.
However because it’s a non-transitive trust – there isn’t a trust path between semperisaz.lab and semperis.lab, and attempting to obtain a referral to this domain fails – as expected.
Figure 1: Non-transitive trusts prevent (direct) authentication to disallowed domains.
The way to circumvent this protection is through using a “local” TGT – i.e. a TGT for grandchild1.child1.semperis.lab (the domain for which the non-transitive trust exists) instead of semperis.lab – to request the referral for the domain for which no direct trust exists.
Figure 2: The “local” TGT can then be used to request a referral for the secondary domain.
While this technique stops short of allowing an attacker to perform “trust hopping” into another forest, Semperis points out the implications of even this limited scope.
“Attackers could query domain information from supposedly disallowed domains, query more sensitive domains or domains with potentially weaker security, or perform Kerberoasting attacks or NTLM authentication coercion on domains that are assumed to be disallowed.”
Pivoting using machine accounts
Semperis have been able to chain this technique with one they previously disclosed, in order to extend the use of local TGTs to enable trust hopping to a forest with which no trusts exist.
Continuing from where the previous scenario left off, the referral TGT for the semperis.lab domain can be used to retrieve a Service Ticket for the LDAP service, which can then be abused to create a machine account in that domain.
Figure 3: Creating the TestComp account in semperis.lab
This account essentially serves as a beachhead within the semperis.lab domain from which we can repeat the exploitation of the flaws found in AD non-transitive trusts.
The machine account’s TGT requests a referral to the trusting domain of treetest.lab, which is then used to request a “local” TGT from treetest.lab.
Figure 4: The machine account retrieves a local TGT for the intermediate domain
This local TGT can then be used to request a referral from the DC of the treetest.lab domain to the dsptest.lab domain – which should have been out-of-bounds of an account in semperis.lab, according to the design intent of non-transitive trusts.
Figure 5: The machine account on semperis.lab can now authenticate to Services in the dsptest.lab domain, for which no trust exists.
“It’s not a vulnerability, so – no.”
Unfortunately, Microsoft believe this flaw can’t be classified as a vulnerability, and as such – won’t be taking any action to rectify it.
Figure 6: Microsoft’s response to Semperis’ bug report
The only “fix” is to disable any non-transitive trusts you may have in your environment.
Failing that, Semperis recommend auditing Windows 4769 events (A Kerberos service ticket was requested), specifically:
Where a local TGT is requested – the domain (Account Domain field) is for a different forest, and the Service Name is krbtgt;
A second event which follows, requesting a referral TGT – the domain (Account Domain field) is a domain in a different forest, and the Service Name is another domain within the local forest.
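That two-event sequence could be modelled over parsed 4769 records along these lines – the event schema and field names are simplified stand-ins for the real log fields, and the forest list reflects Semperis’ lab setup:

```python
# Domains belonging to the local forest (here, Semperis' lab forest).
FOREST_DOMAINS = {
    "semperis.lab",
    "child1.semperis.lab",
    "grandchild1.child1.semperis.lab",
}

def is_foreign(domain: str) -> bool:
    return domain.lower() not in FOREST_DOMAINS

def flag_cross_forest_referrals(events: list[dict]) -> list[tuple[dict, dict]]:
    """Pair a foreign-domain local-TGT request (service 'krbtgt') with a
    follow-up referral request naming a domain inside the local forest."""
    hits = []
    for first, second in zip(events, events[1:]):
        if (is_foreign(first["account_domain"])
                and first["service_name"].lower() == "krbtgt"
                and is_foreign(second["account_domain"])
                and second["service_name"].lower() in FOREST_DOMAINS):
            hits.append((first, second))
    return hits
```

In production you’d key these fields off the real 4769 “Account Domain” and “Service Name” values rather than this simplified schema.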
They also recommend disallowing Authenticated Users from creating machine accounts, in order to mitigate the ability to extend this flaw into additional forests.
Attackers have had themselves a field day abusing a vulnerability in Fortinet’s FortiNAC appliance, thanks in no small part to a PoC exploit which was released by security research company Horizon3 – just two business days after Fortinet warned customers to patch the vulnerability.
Figure 1: A webshell deployed via the PoC exploit, executes base64-encoded commands sent via HTTP POST requests.
Is two days enough?
While it’s not surprising that attackers were quick to capitalise on the weaponised exploit, it’s difficult to understand Horizon3’s reasoning for having released it as soon as they did.
Sure, you might argue that large organisations lucky enough to have formalised vulnerability management teams and processes may have been able to identify and patch the vulnerability in the two business days separating the disclosure of the vulnerability and the release of the PoC.
Smaller entities, however – think regional hospitals, public schools, or even small-scale MSSPs – will likely have neither the capacity nor the ability to do the same.
And this isn’t that far-fetched a scenario either – this hypothetical lines up very neatly with six of Fortinet’s publicly listed case study clients who use FortiNAC appliances in their networks:
Figure 2: FortiNAC customers aren’t the high-rollers you may think they are
Timing is everything
Releasing PoC exploits can help defenders better understand the vulnerable attack surface and detect attempts to exploit it – that’s great, and you’ll find no arguments from me on that.
Where this becomes counter-productive, though, is when the PoC exploits – weaponised or not – are released before organisations have the chance to patch the vulnerability they abuse.
There are a myriad of reasons why organisations may be seen to be dragging their heels on patching, many of which are out of the control of the security team or broader organisation, for example:
The organisation is coming up to an important event where absolutely nothing can go wrong – e.g. they’re about to be listed on a stock exchange, their latest product is about to be released for sale, or a long holiday period is coming up – in these cases a “Change Freeze” can be put in place that prevents security teams from making any changes without running a gauntlet of internal approvals;
The software requires staged updates to multiple components before the vulnerable asset itself can be patched – this can take time and co-ordination from multiple internal and vendor stakeholders;
The asset provides critical functionality, and is only able to run as intended on a legacy, vulnerable version. Believe it or not, business requirements can supersede security risks, with “compensating controls” such as “enhanced monitoring” often accepted as a substitute for eliminating a vulnerability – regardless of how critical it is. Ask anyone who’s run a Penetration test on a hospital what the oldest version of Windows they’ve seen on the network is – try not to grimace in disgust when they do.
Onwards and upwards
Security research is an invaluable input to Cyber Defence functions, as it provides actionable insights into attacker techniques, security vulnerabilities, and more – all of which defenders must understand, in order to protect against them.
I’ve been lucky enough to work in several roles on the defensive side, and have seen first-hand how bureaucracy, poor solution design, and convoluted chains of approval can run down the clock when trying to patch security gaps.
My point is – it’s not always for lack of trying, and sometimes all we need is more time.
Oh, and it goes without saying – though just in case it needs to be said – I didn’t write this piece to take aim at Horizon3.
Their work is great – their timing on this one simply presented the opportunity to address a systemic problem in vulnerability disclosure and management.
I have nothing but respect for their team, and wish them all the best.
With the heyday of macro-enabled spreadsheets and documents behind us, threat actors have experimented with novel ways to deliver their payloads, including disk image files (.iso, .vhd files), HTML Smuggling (.hta files with embedded scripts), and now OneNote files.
While actors can’t embed VBA macros in OneNote files like they can with Word and Excel documents, it does provide a number of other significant advantages:
OneNote files are not affected by Protected View/ Mark-of-the-Web;
It allows embedding malicious Excel/Word/PPT files, which will be opened without Protected View;
HTA, LNK, EXE files and more can be embedded in the document, with the extensions spoofed;
The document can be formatted in order to trick users into opening a malicious file or a link;
Maldoc creation can be automated using the OneNote.Application API and XML.
For a full overview of its potential, have a look at the full article assessing its viability for Red Team activities here.
Who’s using it?
Numerous actors – including Initial Access Brokers – have integrated OneNote files into their infection chains, with the end result ranging from credential theft to deployment of secondary malware – some of which are known to lead to ransomware infections.
Qakbot – a prolific malware family that enables secondary infections which can lead to ransomware deployment;
IcedID – similar to Qakbot, this malware is widely spread and can enable ransomware attacks;
ASyncRAT & xworm – ASyncRAT is a popular, publicly available RAT that is deployed to maintain attacker access to a compromised system. xworm is a stager malware that delivers other payloads while also retaining basic infostealing capabilities;
The RedLine infostealer, and Remcos RAT – RedLine is a highly capable and widely used infostealer, while the Remcos RAT is an open-source trojan that is used to facilitate network intrusions.
Cyber security vendor Proofpoint have also flagged that the Quasar and NetWire RATs, DOUBLEBACK malware, and the AgentTesla infostealer were all observed being delivered via campaigns using OneNote lures.
Figure 1: OneNote Campaigns really accelerated in January
How does this work?
Overview
Similar to traditional Excel and Word document lures, OneNote lures have largely masqueraded as an invoice, remittance advice or other document that the target is urged to view.
Upon opening the document, instead of asking a user to click “Enable Content”, the lure prompts them to double-click a fake “Open” button:
Figure 2: OneNote lures still require some social engineering
This button simply sits over an embedded .hta file, which is executed when the user attempts to double-click the button overlay:
Figure 3: A Qakbot OneNote lure that executes a malicious .hta file when the user double-clicks “Open”
The choice of file that is executed has varied between campaigns and actors, with shortcut files (.lnk), script files (.hta, .vbs) or Windows script files (.wsf) the most commonly observed.
Other file types such as JavaScript (.js, .jse), encoded Visual Basic (.vbe), and batch files (.bat, .cmd) can also be used in their stead.
Example Attack – Qakbot
Max Malyutin was one of the first to flag the adoption of OneNote files by the actors distributing Qakbot, with their lures going virtually undetected by antivirus engines at the beginning of their campaign.
Figure 4: Low detection rates for Qakbot’s initial campaigns
The lure used was as above, with a malicious .hta file executed when the user double-clicked the lure.
This invoked curl to download a secondary payload – the Qakbot malware – which was then executed by rundll32.exe and injected into the wermgr.exe process.
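A simple process-tree check for this kind of chain might look like the following – an illustrative sketch, not a production rule, assuming Sysmon-style process-creation events and keying on the PE `OriginalFileName` so that renamed binaries are still caught:

```python
# LOLBins observed as OneNote children in these campaigns.
SUSPECT_CHILDREN = {"curl.exe", "rundll32.exe", "mshta.exe",
                    "wscript.exe", "cscript.exe"}

def suspicious_onenote_child(event: dict) -> bool:
    """Flag OneNote spawning a known-abused binary.

    Matching on the PE OriginalFileName (as surfaced by Sysmon) survives
    simple renames of the on-disk child binary.
    """
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    child = event.get("original_file_name", "").lower()
    return parent == "onenote.exe" and child in SUSPECT_CHILDREN

print(suspicious_onenote_child({
    "parent_image": r"C:\Program Files\Microsoft Office\root\Office16\ONENOTE.EXE",
    "original_file_name": "mshta.exe",
}))  # True
```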
Figure 4: The Qakbot infection chain, injecting the 2nd stage payload into wermgr.exe
What’s the point?
OneNote files aren’t subject to the same Mark-of-the-Web restrictions (i.e. the default blocking of macros in downloaded files) as Excel and Word documents.
This means that the convoluted .iso > .lnk mechanism that was adopted to circumvent this protection isn’t necessary, with the added benefit that opening a OneNote file is a much more familiar concept to end users than mounting a virtual disk image, making it a more believable lure.
Figure 5: OneNote files allow IcedID payloads to be delivered with less dependencies and steps
Attackers are also able to format the OneNote document to match the theme of the email and further add to the apparent legitimacy of the lure, while still enabling the embedding of malicious code and techniques such as HTML Smuggling.
How can I analyse these files?
A few tools have been flagged by the community, which can help in analysing OneNote files:
As demonstrated by malware analyst pr0xylife, OneDump.py can be chained with other commandline tools to yield quick results, especially where the OneNote file is used to download a 2nd-stage payload from a C2 address:
Figure 6: Didier Stevens’ commandline tools can be chained together to extract easy wins
OneNoteAnalyzer is a significantly more fully-featured tool, extracting metadata, attachments and images from the document for a more detailed review:
Figure 7: OneNoteAnalyzer dumping attached COM executables from the maldoc
For a more detailed walkthrough of the overall process, check out Josh Stroschein’s video that examines an ASyncRAT delivery campaign:
How can I detect it?
Examining files with YARA rules
The YARA rules created and shared publicly thus far have focused on:
The “magic bytes” identifying OneNote files (0xE4525C7B);
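A minimal triage script can key on those same magic bytes and crudely flag embedded executables – a deliberately naive sketch, not a substitute for the tools above:

```python
# The .one header GUID serialises little-endian, so files begin with bytes
# E4 52 5C 7B -- the 0xE4525C7B value the YARA rules key on.
ONENOTE_MAGIC = b"\xe4\x52\x5c\x7b"

def looks_like_onenote(data: bytes) -> bool:
    return data[:4] == ONENOTE_MAGIC

def carve_pe_offsets(data: bytes) -> list[int]:
    """Offsets of embedded 'MZ' markers -- a crude hint of dropped executables
    (expect false positives; real carvers parse the file's object structure)."""
    hits, i = [], data.find(b"MZ")
    while i != -1:
        hits.append(i)
        i = data.find(b"MZ", i + 1)
    return hits
```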
@nas_bench from Nextron Systems has provided this Sigma rule that looks for OneNote files created in suspicious directories, which are commonly abused to drop downloaded files.
I’ve also had a go at creating a Sigma rule that looks for variations of the process tree you’re likely to see in a campaign leveraging OneNote files, including where they’ve renamed the system binaries being abused. You can find it here.
An exploit PoC has been shared publicly for CVE-2023-24055, which relates to the ability for an attacker to add an export trigger within the KeePass XML configuration file, enabling them to dump clear-text passwords from the Password Manager.
Figure 1: The credentials are dumped in plain-text to an xml file
The author of the PoC even helpfully provided a PowerShell one-liner to base64-encode the dumped passwords and exfil them via a HTTP POST request:
Figure 2: Credentials can be exfiltrated with basic, in-built tools
Working as intended
KeePass, however, have argued that this is not a vulnerability, as “the password database is not intended to be secure against an attacker who has that level of access to the local PC”, further insisting that “KeePass cannot magically run securely in an insecure environment.”
Essentially – it’s up to the end-user to adequately secure the product from tampering by malicious attackers.
The full scope of impacted versions of KeePass is still unknown, but the vulnerability is at least confirmed to be present in 2.5x versions.
Detection & Mitigation
While this vulnerability is still being disputed and a patch is yet to be released, organisations and users relying on this product should monitor for file modification events on the config file (KeePass.config.xml), and investigate the feasibility of setting more restrictive ACLs to prevent unauthorised modification of the file.
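As a starting point for that monitoring, a script could enumerate the triggers defined in the config – any entry you didn’t create deserves a close look. A hedged sketch, with element names following the trigger XML shown in public CVE-2023-24055 writeups and an invented sample config:

```python
import xml.etree.ElementTree as ET

def trigger_names(config_xml: str) -> list[str]:
    """Names of all triggers defined in a KeePass.config.xml dump."""
    root = ET.fromstring(config_xml)
    return [t.findtext("Name", default="") for t in root.iter("Trigger")]

# Hypothetical config containing an attacker-planted trigger.
sample = """<Configuration><Application><TriggerSystem><Triggers>
  <Trigger><Name>exfil</Name></Trigger>
</Triggers></TriggerSystem></Application></Configuration>"""
print(trigger_names(sample))  # ['exfil']
```

Diffing this output against a known-good baseline on each config modification event would surface a planted export trigger quickly.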
Researchers from Akamai have released a technical write-up and PoC exploit for CVE-2022-34689, a critical vulnerability in the Windows CryptoAPI library that could enable attackers to spoof legitimate x.509 Certificates, in order to perform authentication or code signing as the spoofed certificate.
This could be abused by attackers to deliver malicious executables that appear to be signed by a legitimate code-signing certificate, or to perform MiTM attacks on encrypted network traffic.
Technical Details
The vulnerability stems from the CreateChainContextFromPathGraph function call in the crypt32.dll module, which validates cached certificates solely based on the value of the certificate’s MD5 thumbprint.
Therefore, if an attacker could serve a malicious certificate whose MD5 thumbprint collides (has the same value) with one that is already in the victim’s certificate cache, they would be able to bypass this check and have their malicious certificate trusted by the victim system.
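A toy model makes the flaw easy to see: trust is keyed purely on an MD5 thumbprint, so anything that collides with a previously verified certificate inherits its trust decision. The class and method names are mine – this is a simplification of the real cache, not CryptoAPI’s implementation:

```python
import hashlib

class ThumbprintCache:
    """Toy model of the vulnerable check: trust keyed on MD5 alone."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def add_verified(self, cert_der: bytes) -> None:
        # Imagine full signature validation happened here, once.
        self._seen.add(hashlib.md5(cert_der).hexdigest())

    def is_trusted(self, cert_der: bytes) -> bool:
        # Vulnerable shortcut: identity is reduced to an MD5 thumbprint,
        # so any colliding certificate inherits the trust decision.
        return hashlib.md5(cert_der).hexdigest() in self._seen

cache = ThumbprintCache()
cache.add_verified(b"-----legitimate end-entity cert-----")
# An attacker who crafts different cert bytes with the same MD5 digest
# (a chosen-prefix collision) would pass is_trusted() with no signature check.
```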
MD5 Hash Collisions
For those thinking “isn’t it still infeasible to forge an input that matches a specific, pre-existing MD5 value?” – you’d be right! That’s called a preimage attack, and it remains infeasible even for broken hashes like MD5 and SHA-1. What is feasible, however, is a collision attack: generating two new, different inputs that hash to the same value.
The methodology used here is a chosen-prefix collision, in which two certificates with different, attacker-chosen prefixes are crafted so that they end up sharing the same MD5 thumbprint.
The specifics of how the attack works aren’t something that can be summarised without getting overly technical, but the key points are that:
There’s a part of the x.509 certificate called the tbsCertificate (to-be-signed) or TBS – this contains all the identity-related values, such as the subject, extensions, Public Key, and more. This is what the Certificate Signature validates;
The attacker must modify the contents of the TBS – in particular replacing the Public Key value with an attacker-controlled Public Key value – which is what allows the attacker to sign as the malicious certificate;
The Certificate Signature is left untouched, which means it no longer validates against the modified TBS. This doesn’t matter for the purpose of this exploit though, as the vulnerable CryptoAPI code only compares the Certificate Thumbprint;
The other steps of the prefix collision attack are performed – manipulating the contents of the original and malicious certificates and copying the result of the MD5 prefix collision computation (i.e. the colliding MD5 thumbprint values) into both of the certificates.
Attack Flow
Based on the above, there are now three certificates to be aware of:
The original, unmodified Target Certificate;
The Modified Target Certificate which has the colliding MD5 thumbprint value inserted into it; and
The Malicious Certificate, which – in addition to having the colliding MD5 thumbprint – also contains the modified Public Key value in the TBS segment.
Akamai’s research team found that Google Chrome v48 and Chromium applications from that time were vulnerable to this flaw, and illustrated the attack flow using the modified certificates to perform a MiTM attack:
Real-World Impact
While Akamai’s research team have thus far only identified Chrome v48 and associated Chromium applications as being vulnerable, they have indicated there are likely more vulnerable targets in-the-wild that have simply not yet been discovered.
Moreover, they noted that they “found that fewer than 1% of visible devices in data centers are patched, rendering the rest unprotected from exploitation of this vulnerability.”
Akamai have recommended using their supplied OSQuery search to identify and patch all impacted versions of the CryptoAPI library – including unsupported versions such as Windows 7/Server 2008. Developers can also use additional WinAPI functions such as CertVerifyCertificateChainPolicy to further validate the legitimacy of a certificate before trusting and using it.
The revolving door of maldocs continues, with OneNote documents the latest seen abused in-the-wild.
The collaborative file format has been leveraged in a limited number of campaigns to deliver malware, with ASyncRAT and xworm among the malware families seen distributed.
Uptake of the document format hasn’t been widespread just yet, but given the novelty and utility of the delivery method, it’s worth familiarising yourself with the tools and techniques needed to analyse such payloads.
A bit of background
While actors can’t embed VBA macros in OneNote files like they can with Word and Excel documents, there are a number of other advantages:
OneNote files are not affected by Protected View/ Mark-of-the-Web;
It allows embedding malicious Excel/Word/PPT files, which will be opened without Protected View;
Allows embedding HTA, LNK, EXE files and spoof extensions;
The document can be formatted in order to trick users into opening a malicious file or a link;
Can be automated using OneNote.Application and XML.
For a full overview of its potential, have a look at the full article assessing its viability for Red Team activities here.
Analysis Tips
Didier Stevens has shared this write-up of how he extracted an executable embedded in a OneNote file, along with a new Python script (still in beta) he created to help with the task.
If you’ve got more time to spare to watch a video, Josh Stroschein has shared this walkthrough of a OneNote file he picked apart:
Tools for Analysis
A few tools have been flagged by the community, which can help in analysing OneNote files: