
Is the growing use of AI actually leading to more human analysis?


Artificial Intelligence (AI) is revolutionising many aspects of technology and society, including the field of Open-Source Intelligence (OSINT). AI tools have helped OSINT analysts become more efficient by automating the more menial aspects of their job, such as taking notes with ChatGPT-powered AI assistants or drafting Google search queries with ChatGPT.
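As an illustration of that second use, the minimal sketch below shows how an analyst might ask a language model to draft candidate search queries. It assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model choice, function name, and prompt are illustrative only, not a description of any particular product.

```python
# Minimal sketch: drafting Google search queries with a language model.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. Prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_search_queries(topic: str, n: int = 5) -> list[str]:
    """Ask the model for candidate Google queries on a research topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an OSINT research assistant. "
                    f"Return exactly {n} Google search queries, one per line."
                ),
            },
            {"role": "user", "content": topic},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for query in draft_search_queries("shell companies registered in Cyprus"):
        print(query)
```

Even here, the efficiency gain comes with a caveat the rest of this piece explores: the model’s suggestions still need a human analyst to judge their relevance and accuracy.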

However, these advancements also pose significant threats that could undermine the reliability and effectiveness of intelligence. Understanding the key threats and how to counter them through human analysis is crucial for the future of OSINT in an increasingly AI-driven world.

  1. AI-enhanced misinformation and disinformation

AI’s ability to generate and spread false information is one of the most significant threats to OSINT. Advanced AI algorithms can create highly convincing fake news, deepfake videos, and inauthentic social media profiles. In 2019, a manipulated video of Nancy Pelosi appearing drunk went viral on social media, gathering millions of views before platforms attempted to take it down. This demonstrates the power that manipulated media, now increasingly produced with AI, can have over public perception.

This AI-generated content is becoming increasingly difficult to distinguish from genuine information, making it challenging for OSINT practitioners to verify the authenticity of the data they collect. The speed and scale at which AI can disseminate disinformation further exacerbates this problem, potentially overwhelming traditional verification methods.

  2. Information overload

AI-powered tools can produce vast amounts of content quickly and cheaply. While beneficial in some contexts, this capability can lead to information overload for analysts: a flood of AI-generated content can bury valuable intelligence under layers of irrelevant or misleading data, making it harder to identify and extract actionable insights. Automated content generation can also be used to manipulate search engine rankings and social media trends, skewing the visibility of authentic information. Even Google’s own generative AI summaries of search results have sometimes produced false or harmful answers.
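One partial countermeasure is automated triage before any human review. The sketch below, an illustration rather than a description of an established workflow, collapses near-duplicate posts with Python’s standard-library difflib so an analyst assesses each distinct claim only once; the 0.9 similarity threshold is an arbitrary assumption.

```python
# Minimal sketch: collapsing near-duplicate posts before human review.
# difflib is part of Python's standard library; the 0.9 threshold is an
# illustrative assumption, not a recommended setting.
from difflib import SequenceMatcher

def deduplicate(posts: list[str], threshold: float = 0.9) -> list[str]:
    """Keep only posts that are not near-duplicates of an earlier post."""
    kept: list[str] = []
    for post in posts:
        if all(
            SequenceMatcher(None, post.lower(), seen.lower()).ratio() < threshold
            for seen in kept
        ):
            kept.append(post)
    return kept

posts = [
    "Breaking: dam failure reported near the city centre.",
    "BREAKING - dam failure reported near the city centre!!",
    "Local authorities deny reports of a dam failure.",
]
print(deduplicate(posts))  # two distinct claims survive, not three
```

Pairwise comparison like this scales poorly to millions of posts; production pipelines tend to use techniques such as locality-sensitive hashing instead, but the principle of reducing volume before human judgement is the same.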

  3. Bias and ethical concerns

AI algorithms are only as good as the data they are trained on, and they can inherit the biases present in that data. When applied to OSINT, biased AI can lead to skewed analysis and incorrect conclusions, which is particularly problematic when covering sensitive or political topics.

Additionally, the use of AI in OSINT raises ethical concerns about surveillance and privacy, as AI tools can process and analyse vast amounts of personal data with minimal oversight.

  4. Decreased transparency and accountability

AI’s decision-making processes are often opaque, making it difficult to understand how conclusions are reached. This lack of transparency can undermine the accountability of OSINT analysts, who may not be able to explain or justify the intelligence derived from AI tools. The opaque nature of AI can also lead to an overreliance on automated systems, potentially eroding analysts’ critical-thinking skills.

In 2016, Microsoft launched an AI chatbot, Tay, on Twitter (now X) to showcase AI’s capabilities, but it quickly began posting abusive and ignorant messages after being inundated with offensive posts from users. This incident highlighted how easily AI can be manipulated into producing harmful outputs from the information it ingests.

How human analysis can mitigate these threats

Human insight and analysis are essential in mitigating these AI-related threats in OSINT. Professional analysts bring contextual understanding and critical thinking, helping to identify and avoid AI-generated misinformation. An analyst’s critical eye also helps counter search engine manipulation, maintaining the visibility of authentic information and safeguarding the integrity of OSINT operations.

Moreover, human oversight ensures ethical judgement and accountability in OSINT operations. Neon’s analysts make nuanced decisions every day about data privacy, consent, and the ethical implications of intelligence gathering. We also ensure transparency by explaining our methods and justifying our analyses to clients.

In conclusion, while AI has made certain aspects of OSINT more efficient, it is essential to be aware of the associated threats. By acknowledging and addressing these challenges, we can work towards a future where AI and human analysis coexist, ensuring the reliability and effectiveness of OSINT operations.

By Klaara Mikola
