Principal and Head of Security Risk & Resilience Ben Joelson writes in the October issue of ASIS Security Technology about how corporate security teams use Open Source Intelligence (OSINT). These teams proactively monitor social media and other online sources for signs that someone may be targeting their employees, facilities, or operations. The public usually hears about online chatter only when a failure to detect it ends in an attack; the day-to-day successes, by design, rarely make the news.
There are vast amounts of online content—ranging from videos to chatrooms in the darker corners of the Internet—that are discoverable and aggregable. OSINT tools already use sophisticated search algorithms to sift through thousands of posts. Now, with the advent of artificial intelligence (AI), platforms are harnessing the power of machine learning to improve the speed and accuracy of detection.
Here are a few examples of how machine learning and AI have changed—or are on the cusp of changing—OSINT as a discipline.
First, tools now exist that convert images and video to natural language, which can then be queried like a search engine. These tools can crawl troves of images and videos to detect firearms, other weapons, or even client logos.
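To illustrate the idea, here is a minimal sketch of that second step: once a vision model (not shown) has converted frames to natural-language captions, detection reduces to a text search problem. The filenames, captions, and query terms below are hypothetical, not taken from any real platform.

```python
# Sketch: querying image-derived captions like a search engine.
# Captions would come from a vision model; these are made up.
from collections import defaultdict

captions = {
    "frame_001.jpg": "a man holding a rifle outside an office building",
    "frame_002.jpg": "crowd gathered near a company logo on a storefront",
    "frame_003.jpg": "empty parking lot at night",
}

def build_index(captions):
    """Map each word to the set of frames whose caption contains it."""
    index = defaultdict(set)
    for frame, text in captions.items():
        for word in text.lower().split():
            index[word].add(frame)
    return index

def query(index, *terms):
    """Return frames whose captions contain every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

index = build_index(captions)
print(query(index, "rifle"))  # → {'frame_001.jpg'}
```

Production systems replace the keyword index with semantic (embedding-based) search, but the workflow—caption once, query many times—is the same.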
AI can also be leveraged to correlate seemingly disparate user data sets at scale and in real time. Historically, this work took an analyst hours, particularly when disambiguating similar names or usernames (e.g., determining that the John Smith in Portsmouth is not the John Smith in Portland). AI and machine learning will make these queries, and on some platforms already are making them, a near-real-time exercise, with confidence scores and analysis at the click of a button.
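The kind of confidence score described above can be sketched with a toy entity-resolution pass. The records, field weights, and string-similarity measure here are illustrative assumptions, not any vendor's actual method.

```python
# Sketch: scoring whether two scraped records describe the same person.
# Records, weights, and similarity measure are all illustrative.
from difflib import SequenceMatcher

records = [
    {"name": "John Smith", "location": "Portsmouth"},
    {"name": "John Smith", "location": "Portland"},
    {"name": "Jon Smith",  "location": "Portsmouth"},
]

def similarity(a, b):
    """Simple string-similarity score between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_confidence(r1, r2, name_w=0.6, loc_w=0.4):
    """Weighted confidence that two records describe the same person."""
    return (name_w * similarity(r1["name"], r2["name"])
            + loc_w * similarity(r1["location"], r2["location"]))

# Same name, different cities: scores below a cautious 0.85 threshold,
# so an analyst would review it rather than auto-merge.
print(f"confidence: {match_confidence(records[0], records[1]):.2f}")
```

Real platforms layer many more signals (photos, connections, posting times) and learned weights on top, but the output is the same: a confidence score an analyst can review in seconds rather than hours.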
While AI tools provide efficiency, humans in the loop remain absolutely necessary. These powerful tools make intelligence professionals better at their jobs, but they do not replace human judgment.
What does this mean for security stakeholders—particularly in the private sector?
First, it means that duty of care requirements may change. Today, an ordinary, reasonably prudent company likely monitors for overt threats against its assets, employees, or executives. If AI makes this process even easier, and nearly frictionless, it could signal a sea change in how the legal system views duty of care, with courts subsequently imposing a heightened standard.
In the same way that social media upended how the private sector detects a potential attack, AI will undoubtedly disrupt how intelligence analysts correlate and detect open-source posts. As with any new technology or paradigm shift, the challenge will be avoiding the bleeding edge while ensuring you aren’t left behind.
Read the full article here.
Ben Joelson advises global companies on complex security risks and challenges. Read more about the Security Risk & Resilience practice and download our latest case study.