
Inside the SOC

Exploring AI Threats: Package Hallucination Attacks

30 Oct 2023
Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The culmination of development ongoing since 2018, ChatGPT represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments had employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the increased linguistic complexity of social engineering and phishing attacks resulting from generative AI tool use. Darktrace observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to the impacts of errors intrinsic to generative AI tools. One such error is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term referring to instances in which the predictive elements of a generative AI or large language model (LLM) produce an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular, intended behavior of an AI model, which should provide a response based on the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far from unlikely: the AI models used in LLMs are merely predictive, focusing on the most probable text or outcome rather than on factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask an LLM about a specific piece of code or how to solve a specific problem, its answer will often recommend a third-party package. It is possible that packages recommended by generative AI tools are themselves AI hallucinations: the recommended packages may never have been published, or, more accurately, may not have been published prior to the cut-off date of the model’s training data. If a non-existent package is suggested consistently, and a developer copies the code snippet wholesale, this may leave their organization vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, the researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. At least one unpublished package was included in 20% of the ChatGPT responses pertaining to Node.js, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers can find packages to create by asking generative AI tools generic questions and checking whether the suggested packages already exist. As such, attacks using this vector are unlikely to target specific organizations, instead posing a more widespread threat to users of generative AI tools.
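The same existence check can serve defenders vetting LLM-suggested dependencies. Below is a minimal sketch, assuming Python 3 with the requests library, that queries PyPI’s public JSON API (which returns HTTP 404 for names that have never been published); the suggested package names are hypothetical:

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical names returned by a generative AI tool
suggested = ["requests", "totally-made-up-http-helper"]

for name in suggested:
    if package_exists_on_pypi(name):
        print(f"{name}: published on PyPI")
    else:
        print(f"{name}: not on PyPI -- possible hallucination")
```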

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors can discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered the hallucination of an unpublished package, they can create a package with the same name, include a malicious payload, and publish it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios, it is not necessary for the malicious actor to manipulate the training data (data poisoning) to achieve the desired outcome; a complex and time-consuming attack phase can thus easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The next stages of the attack are determined by whatever goal the malicious package has been designed to fulfil. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of malware types.

As GitHub is a service commonly used by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operations Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies defining expectations around the use of such tools. It is simple, yet critical, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
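As a concrete illustration of the download-count check, the sketch below queries the public pypistats.org API, assuming Python 3 with the requests library; the package name and the 1,000-download threshold are arbitrary illustrations, not established cut-offs:

```python
import requests

DOWNLOAD_THRESHOLD = 1_000  # arbitrary illustrative cut-off, tune per policy

def recent_downloads(name: str) -> int:
    """Return the last-month download count from pypistats.org."""
    resp = requests.get(f"https://pypistats.org/api/packages/{name}/recent",
                        timeout=10)
    resp.raise_for_status()  # raises HTTPError for unknown packages (HTTP 404)
    return resp.json()["data"]["last_month"]

candidate = "some-ai-suggested-package"  # hypothetical recommendation

try:
    count = recent_downloads(candidate)
    if count < DOWNLOAD_THRESHOLD:
        print(f"{candidate}: only {count} downloads last month -- review before use")
    else:
        print(f"{candidate}: {count} downloads last month")
except requests.HTTPError:
    print(f"{candidate}: unknown to pypistats -- possible hallucination")
```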

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

A greater embrace of AI tools in the workplace is inevitable in the coming years, as the technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



The State of AI in Cybersecurity: The Impact of AI on Cybersecurity Solutions

13 May 2024

About the AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners”, which provided an overview of the entire report. This blog focuses on one aspect of that report: the impact of AI on cybersecurity solutions.

To access the full report, click here.

The effects of AI on cybersecurity solutions

Overwhelming alert volumes, high false positive rates, and endlessly innovative threat actors keep security teams scrambling. Defenders have been forced to take a reactive approach, struggling to keep pace with an ever-evolving threat landscape. It is hard to find time to address long-term objectives or revamp operational processes when you are always engaged in hand-to-hand combat.                  

The impact of AI on the threat landscape will soon make yesterday’s approaches untenable. Cybersecurity vendors are racing to capitalize on buyer interest in AI by supplying solutions that promise to meet the need. But not all AI is created equal, and not all these solutions live up to the widespread hype.  

Do security professionals believe AI will impact their security operations?

Yes! 95% of cybersecurity professionals agree that AI-powered solutions will level up their organization’s defenses.                                                                

Not only is there strong agreement about the ability of AI-powered cybersecurity solutions to improve the speed and efficiency of prevention, detection, response, and recovery, but that agreement is nearly universal, with more than 95% alignment.

This AI-powered future is about much more than generative AI. While generative AI can help accelerate the data retrieval process within threat detection, create quick incident summaries, automate low-level tasks in security operations, and simulate phishing emails and other attack tactics, most of these use cases were ranked lower in their impact to security operations by survey participants.

There are many other types of AI, which can be applied to many other use cases:

Supervised machine learning: Applied more often than any other type of AI in cybersecurity. Trained on attack patterns and historical threat intelligence to recognize known attacks.

Natural language processing (NLP): Applies computational techniques to process and understand human language. It can be used in threat intelligence, incident investigation, and summarization.

Large language models (LLMs): Used in generative AI tools, this type of AI applies deep learning models trained on massively large data sets to understand, summarize, and generate new content. The integrity of the output depends upon the quality of the data on which the AI was trained.

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies. With the correct models, this AI can use anomaly-based detections to identify all kinds of cyber-attacks, including entirely unknown and novel ones.
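As a minimal illustration of this unsupervised approach (not Darktrace’s actual models), the sketch below uses scikit-learn’s Isolation Forest to learn from unlabeled, invented connection-level features and flag a deviation; the feature set and values are hypothetical:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical per-connection features: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 5_000, 10],
                            size=(500, 3))

# A deviation from the learned pattern: huge upload, tiny response, short-lived
suspicious = np.array([[900_000, 1_000, 2]])

# Unsupervised: the model learns from raw, unlabeled traffic only
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

print(model.predict(suspicious))          # [-1] -> flagged as anomalous
print(model.predict(normal_traffic[:3]))  # typically [1 1 1] -> consistent with learned behavior
```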

What are the areas of cybersecurity AI will impact the most?

Improving threat detection is the #1 area within cybersecurity where AI is expected to have an impact.                                                                                  

The most frequent response to this question, improving threat detection capabilities in general, was top ranked by slightly more than half (57%) of respondents. This suggests security professionals hope that AI will rapidly analyze enormous numbers of validated threats within huge volumes of fast-flowing events and signals, and that it will ultimately prove a boon to front-line security analysts. They are not wrong.

Identifying exploitable vulnerabilities (mentioned by 50% of respondents) is also important. Strengthening vulnerability management by applying AI to continuously monitor the exposed attack surface for risks and high-impact vulnerabilities can give defenders an edge. If it prevents threats from ever reaching the network, AI will have a major downstream impact on incident prevalence and breach risk.

Where will defensive AI have the greatest impact on cybersecurity?

Cloud security (61%), data security (50%), and network security (46%) are the domains where defensive AI is expected to have the greatest impact.        

Respondents selected broader domains over specific technologies. In particular, they chose the areas experiencing a renaissance. Cloud is the future for most organizations, and the effects of cloud adoption on data and networks are intertwined. All three domains are increasingly central to business operations, impacting everything everywhere.

Responses were remarkably consistent across demographics, geographies, and organization sizes, suggesting that nearly all survey participants are thinking about this similarly—that AI will likely have far-reaching applications across the broadest fields, as well as fewer, more specific applications within narrower categories.

Going forward, it will be paramount for organizations to augment their cloud and SaaS security with AI-powered anomaly detection, as threat actors sharpen their focus on these targets.

How will security teams stop AI-powered threats?            

Most security stakeholders (71%) are confident that AI-powered security solutions are better able to block AI-powered threats than traditional tools.

There is strong agreement that AI-powered solutions will be better at stopping AI-powered threats (71% of respondents are confident in this), and there’s also agreement (66%) that AI-powered solutions will be able to do so automatically. This implies significant faith in the ability of AI to detect threats both precisely and accurately, and also orchestrate the correct response actions.

There is also a high degree of confidence in the ability of security teams to implement and operate AI-powered solutions, with only 30% of respondents expressing doubt. This bodes well for the acceptance of AI-powered solutions, with stakeholders saying they’re prepared for the shift.

On the one hand, it is positive that cybersecurity stakeholders are beginning to understand the terms of this contest—that is, that only AI can be used to fight AI. On the other hand, there are persistent misunderstandings about what AI is, what it can do, and why choosing the right type of AI is so important. Only when those popular misconceptions have become far less widespread can our industry advance its effectiveness.  

To access the full report, click here.

About the author: The Darktrace Community

Inside the SOC

Connecting the Dots: Darktrace’s Detection of the Exploitation of the ConnectWise ScreenConnect Vulnerabilities

10 May 2024

Introduction

Across an ever-changing cyber landscape, it is commonplace for threat actors to actively identify and exploit newly discovered vulnerabilities within commonly utilized services and applications. While attackers are likely to prioritize developing exploits for the more severe and global Common Vulnerabilities and Exposures (CVEs), they typically have the most success exploiting known vulnerabilities within the first couple of years of their disclosure to the public.

Addressing these vulnerabilities in a timely manner reduces their effectiveness, decreasing the pace of malicious actor operations and forcing the pursuit of more costly and time-consuming methods, such as zero-day exploits or attacks on software supply chain operations. While actors also develop tools to exploit other vulnerabilities, developing exploits for critical and publicly known vulnerabilities gives them impactful, low-cost tools they can use for quite some time.

Between January and March 2024, the Darktrace Threat Research team investigated one such example that involved indicators of compromise (IoCs) suggesting the exploitation of vulnerabilities in ConnectWise’s remote monitoring and management (RMM) software ScreenConnect.

What are the ConnectWise ScreenConnect vulnerabilities?

CVE-2024-1709 is an authentication bypass vulnerability in ScreenConnect 23.9.7 (and all earlier versions) that, if exploited, may allow an attacker direct access to confidential information or critical systems. This exploit paves the way for a second ScreenConnect vulnerability, CVE-2024-1708, a path traversal flaw that may allow an attacker to execute remote code or directly impact confidential data or critical systems [1].

ConnectWise released a patch in ScreenConnect version 23.9.8 and automatically updated cloud instances, while urging security teams to update on-premises versions immediately [3].

If exploited in conjunction, these vulnerabilities could allow a malicious actor to create new administrative accounts on publicly exposed instances by evading existing security measures. This, in turn, could enable attackers to assume an administrative role, disable security tools, create backdoors, and disrupt RMM processes. Access to an organization’s environment in this manner poses a serious risk, potentially leading to significant consequences such as the deployment of ransomware, as seen in various incidents involving the exploitation of ScreenConnect [2].

Darktrace Coverage of ConnectWise Exploitation

Darktrace’s anomaly-based detection was able to identify evidence of exploitation related to CVE-2024-1708 and CVE-2024-1709 across two distinct timelines; these detections included connectivity with endpoints that were later confirmed to be malicious by multiple open-source intelligence (OSINT) vendors. The activity observed by Darktrace suggests that threat actors were actively exploiting these vulnerabilities across multiple customer environments.

In the cases observed across the Darktrace fleet, Darktrace DETECT™ and Darktrace RESPOND™ were able to work in tandem to pre-emptively identify and contain network compromises from the onset. While Darktrace RESPOND was enabled in most customer environments affected by the ScreenConnect vulnerabilities, in the majority of cases it was configured in Human Confirmation mode. Whilst in Human Confirmation mode, RESPOND will provide recommended actions to mitigate ongoing attacks, but these actions require manual approval from human security teams.

When enabled in autonomous response mode, Darktrace RESPOND will take action automatically, shutting down suspicious activity as soon as it is detected without the need for human intervention. This is the ideal end state for RESPOND as actions can be taken at machine speed, without any delays waiting for user approval.

Within the patterns of activity observed by Darktrace, the typical attack timeline included the following:

Darktrace observed devices on affected customer networks performing activity indicative of ConnectWise ScreenConnect usage, for example, connections over ports 80 and 8041, connections to screenconnect[.]com, and the use of the user agent “LabTech Agent”. OSINT research suggests that this user agent is an older name for ConnectWise Automate [5], which also includes ScreenConnect as standard [6].

Figure 1: Darktrace DETECT model alert highlighting the use of a remote management tool, namely “screenconnect[.]com”.

This activity was typically followed by anomalous connections to the external IP address 108.61.210[.]72 using URIs of the form “/MyUserName_DEVICEHOSTNAME”, as well as additional connections to another external IP, 185.62.58[.]132. Both of these external locations have since been reported as potentially malicious [14], with 185.62.58[.]132 in particular linked to ScreenConnect post-exploitation activity [2].

Figure 2: Darktrace DETECT model alert highlighting the unusual connection to 185.62.58[.]132 via port 8041.

Figure 3: Darktrace DETECT model alert highlighting connections to 108.61.210[.]72 using a new user agent and the “/MyUserName_DEVICEHOSTNAME” URI.
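To illustrate how a security team might retrospectively sweep HTTP logs for the indicators described above, here is a minimal sketch assuming Python 3 and a CSV export with dest_ip, user_agent, and uri columns; the file name and schema are hypothetical, and a user-agent match alone indicates ScreenConnect usage rather than compromise:

```python
import csv
import re

# Indicators drawn from the activity described in this post
IOC_IPS = {"108.61.210.72", "185.62.58.132", "116.0.56.101"}
IOC_USER_AGENT = "LabTech Agent"  # indicates ScreenConnect/Automate usage, not compromise
URI_PATTERN = re.compile(r"^/\w+_[\w\-]+$")  # e.g. "/MyUserName_DEVICEHOSTNAME"

def scan_http_log(path):
    """Yield log rows matching any ScreenConnect-exploitation indicator."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["dest_ip"] in IOC_IPS
                    or IOC_USER_AGENT in row["user_agent"]
                    or URI_PATTERN.match(row["uri"])):
                yield row

if __name__ == "__main__":
    for hit in scan_http_log("http_log.csv"):  # hypothetical log export
        print(hit)
```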

Same Exploit, Different Tactics?  

While the majority of instances of ConnectWise ScreenConnect exploitation observed by Darktrace followed the above pattern of activity, Darktrace was able to identify some deviations from this.

In one customer environment, Darktrace’s detection of post-exploitation activity began with the same indicators of ScreenConnect usage, including connections to screenconnect[.]com via port 8041, followed by connections to unusual external endpoints flagged as malicious by OSINT, in this case 116.0.56[.]101 [16] [17]. However, on this deployment Darktrace also observed threat actors downloading a suspicious AnyDesk installer from the endpoint via the URI “hxxp[:]//116.0.56[.]101[:]9191/images/Distribution.exe”.

Figure 4: Darktrace DETECT model alert highlighting the download of an unusual executable file from 116.0.56[.]101.

Further investigation by Darktrace’s Threat Research team revealed that this endpoint was associated with threat actors exploiting CVE-2024-1708 and CVE-2024-1709 [1]. Darktrace was additionally able to identify that, despite the customer being based in the United Kingdom, the file downloaded came from Pakistan. Darktrace recognized that this represented a deviation from the device’s expected pattern of activity and promptly alerted for it, bringing it to the attention of the customer.

Figure 5: External Sites Summary within the Darktrace UI pinpointing the geographic locations of external endpoints, in this case highlighting a file download from Pakistan.

Darktrace’s Autonomous Response

In this instance, the customer had Darktrace enabled in autonomous response mode and the post-exploitation activity was swiftly contained, preventing the attack from escalating.

As soon as the suspicious AnyDesk download was detected, Darktrace RESPOND applied targeted measures to prevent additional malicious activity. This included blocking connections to 116.0.56[.]101 and “*.56.101”, along with blocking all outgoing traffic from the device. Furthermore, RESPOND enforced a “pattern of life” on the device, restricting its activity to its learned behavior, allowing connections that are considered normal, but blocking any unusual deviations.

Figure 6: Darktrace RESPOND enforcing a “pattern of life” on the offending device after detecting the suspicious AnyDesk download.

Figure 7: Darktrace RESPOND blocking connections to the suspicious endpoint 116.0.56[.]101 and “*.56.101” following the download of the suspicious AnyDesk installer.

The customer was later able to use RESPOND to manually quarantine the offending device, ensuring that all incoming and outgoing traffic to or from the device was prohibited, thus preventing any further malicious communication or lateral movement attempts.

Figure 8: The actions applied by Darktrace RESPOND in response to the post-exploitation activity related to the ScreenConnect vulnerabilities, including the manually applied “Quarantine device” action.

Conclusion

In the observed cases of the ConnectWise ScreenConnect vulnerabilities being exploited across the Darktrace fleet, Darktrace was able to pre-emptively identify and contain network compromises from the onset, offering vital protection against disruptive cyber-attacks.

While much of the post-exploitation activity observed by Darktrace remained the same across different customer environments, important deviations were also identified suggesting that threat actors may be adapting their tactics, techniques and procedures (TTPs) from campaign to campaign.

While new vulnerabilities will inevitably surface and threat actors will continually look for novel ways to evolve their methods, Darktrace’s Self-Learning AI and behavioral analysis offer organizations full visibility over new or unknown threats. Rather than relying on existing threat intelligence or static lists of “known bads”, Darktrace is able to detect emerging activity based on anomalies and respond without latency, safeguarding customer environments whilst causing minimal disruption to business operations.

Credit to Emma Foulger, Principal Cyber Analyst, for their contribution to this blog.

Appendices

Darktrace Model Coverage

DETECT Models

Compromise / Agent Beacon (Medium Period)

Compromise / Agent Beacon (Long Period)

Anomalous File / EXE from Rare External Location

Device / New PowerShell User Agent

Anomalous Connection / Powershell to Rare External

Anomalous Connection / New User Agent to IP Without Hostname

User / New Admin Credentials on Client

Device / New User Agent

Anomalous Connection / Multiple HTTP POSTs to Rare Hostname

Anomalous Server Activity / Anomalous External Activity from Critical Network Device

Compromise / Suspicious Request Data

Compliance / Remote Management Tool On Server

Anomalous File / Anomalous Octet Stream (No User Agent)

RESPOND Models

Antigena / Network::External Threat::Antigena Suspicious File Block

Antigena / Network::External Threat::Antigena File then New Outbound Block

Antigena / Network::Significant Anomaly::Antigena Enhanced Monitoring from Client Block

Antigena / Network::Significant Anomaly::Antigena Significant Anomaly from Client Block

Antigena / Network::Significant Anomaly::Antigena Controlled and Model Breach

Antigena / Network::Insider Threat::Antigena Unusual Privileged User Activities Block

Antigena / Network / External Threat / Antigena Suspicious File Pattern of Life Block

Antigena / Network / Insider Threat / Antigena Unusual Privileged User Activities Pattern of Life Block

List of IoCs

IoC - Type - Description

185.62.58[.]132 - IP - IP linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

108.61.210[.]72 - IP - IP linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

116.0.56[.]101 - IP - IP linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

/MyUserName_DEVICEHOSTNAME - URI - URI linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

/images/Distribution.exe - URI - URI linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

24780657328783ef50ae0964b23288e68841a421 - SHA1 file hash - File hash linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

a21768190f3b9feae33aaef660cb7a83 - MD5 file hash - File hash linked with threat actors exploiting CVE-2024-1708 and CVE-2024-1709

MITRE ATT&CK Mapping

Technique - Tactic - ID - Sub-technique of

Web Protocols - COMMAND AND CONTROL - T1071.001 - T1071
Web Services - RESOURCE DEVELOPMENT - T1583.006 - T1583
Drive-by Compromise - INITIAL ACCESS - T1189 - N/A
Ingress Tool Transfer - COMMAND AND CONTROL - T1105 - N/A
Malware - RESOURCE DEVELOPMENT - T1588.001 - T1588
Exploitation of Remote Services - LATERAL MOVEMENT - T1210 - N/A
PowerShell - EXECUTION - T1059.001 - T1059
Pass the Hash - DEFENSE EVASION, LATERAL MOVEMENT - T1550.002 - T1550
Valid Accounts - DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS - T1078 - N/A
Man in the Browser - COLLECTION - T1185 - N/A
Exploit Public-Facing Application - INITIAL ACCESS - T1190 - N/A
Exfiltration Over C2 Channel - EXFILTRATION - T1041 - N/A
IP Addresses - RECONNAISSANCE - T1590.005 - T1590
Remote Access Software - COMMAND AND CONTROL - T1219 - N/A
Lateral Tool Transfer - LATERAL MOVEMENT - T1570 - N/A
Application Layer Protocol - COMMAND AND CONTROL - T1071 - N/A

References:

[1] https://unit42.paloaltonetworks.com/connectwise-threat-brief-cve-2024-1708-cve-2024-1709/  

[2] https://www.huntress.com/blog/slashandgrab-screen-connect-post-exploitation-in-the-wild-cve-2024-1709-cve-2024-1708    

[3] https://www.huntress.com/blog/a-catastrophe-for-control-understanding-the-screenconnect-authentication-bypass

[4] https://www.speedguide.net/port.php?port=8041  

[5] https://www.connectwise.com/company/announcements/labtech-now-connectwise-automate

[6] https://www.connectwise.com/solutions/software-for-internal-it/automate

[7] https://www.securityweek.com/slashandgrab-screenconnect-vulnerability-widely-exploited-for-malware-delivery/

[8] https://arcticwolf.com/resources/blog/cve-2024-1709-cve-2024-1708-follow-up-active-exploitation-and-pocs-observed-for-critical-screenconnect-vulnerabilities/

https://success.trendmicro.com/dcx/s/solution/000296805?language=en_US&sfdcIFrameOrigin=null

[9] https://www.connectwise.com/company/trust/security-bulletins/connectwise-screenconnect-23.9.8

[10] https://socradar.io/critical-vulnerabilities-in-connectwise-screenconnect-postgresql-jdbc-and-vmware-eap-cve-2024-1597-cve-2024-22245/

[11] https://www.trendmicro.com/en_us/research/24/b/threat-actor-groups-including-black-basta-are-exploiting-recent-.html

[12] https://otx.alienvault.com/indicator/ip/185.62.58.132

[13] https://www.virustotal.com/gui/ip-address/185.62.58.132/community

[14] https://www.virustotal.com/gui/ip-address/108.61.210.72/community

[15] https://otx.alienvault.com/indicator/ip/108.61.210.72

[16] https://www.virustotal.com/gui/ip-address/116.0.56[.]101/community

[17] https://otx.alienvault.com/indicator/ip/116.0.56[.]101

About the author: Justin Torres, Cyber Analyst