INTRODUCTION
Insider trading, the buying or selling of securities on the basis of material, non-public information (MNPI), is one of the most insidious and difficult forms of market misconduct in the financial industry. It contravenes the principle of market fairness, erodes investor trust, and distorts the price discovery processes that are fundamental to capital markets by giving an unfair informational advantage to insiders, whether executives, employees, or connected third parties. Although it is illegal in most jurisdictions, including under the U.S. Securities Exchange Act of 1934, the UK Criminal Justice Act 1993, and the SEBI (Prohibition of Insider Trading) Regulations in India, it is notoriously difficult to enforce because the offenses are covert and frequently highly sophisticated.
Not only is insider trading a white-collar crime that is hard to establish legally; it also routinely escapes classic surveillance methods such as pre-defined rules, manual review, and after-the-fact investigations. The emergence of Artificial Intelligence (AI) marks a major change in the regulatory toolkit. Regulators (including the U.S. SEC, the UK FCA, and ASIC) and financial institutions are increasingly using AI technologies, including machine learning (ML), natural language processing (NLP), graph analytics, and anomaly detection algorithms, to detect and prevent insider trading. These systems can process large volumes of structured and unstructured data, identify concealed trading networks, and examine electronic communications and news sentiment to forecast or retrospectively examine illicit conduct.
An example is the SMARTS surveillance system used by Nasdaq, which applies AI-driven behavioral profiling and trade pattern recognition to identify market abuse across trading platforms and jurisdictions. Similarly, Refinitiv's World-Check One and Goldman Sachs's predictive compliance systems are steps toward automated, real-time compliance intelligence. Such tools have the potential to identify insider trading more quickly and consistently than human-driven processes, which could reduce the resulting damage to the market.
AI TECHNIQUES AND MODELS FOR INSIDER TRADING DETECTION
I. Deep Learning Models
Deep learning models, including feedforward neural networks (FNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, are applied to both structured sequences and unstructured text. These models are essential for pattern recognition in high-dimensional, complex data, where the input may be structured information such as numerical trade logs and timestamps or unstructured information such as social media posts and news alerts.
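To make this concrete, the sketch below (a minimal illustration, not any regulator's production model) trains a small feedforward network on synthetic trade-log features; the feature set, labels, and review threshold are all assumptions for demonstration.

```python
# Minimal sketch: a feedforward network scoring trades for review.
# Synthetic data and feature choices are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed features per trade: [order size, price deviation,
# time-to-announcement, account age] - standardized synthetic values.
X = torch.randn(1000, 4)
# Synthetic labels: 1 = "suspicious" (a toy rule, for demonstration only).
y = ((X[:, 0] + X[:, 1]) > 1.5).float().unsqueeze(1)

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Score a new trade; values above a review threshold go to a human analyst.
new_trade = torch.tensor([[2.0, 1.0, -0.5, 0.3]])
print(f"suspicion score: {model(new_trade).item():.3f}")
```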
II. Natural Language Processing (NLP)
NLP methods complement this capability by analyzing unstructured data for signals of possible insider behavior. Sentiment analysis, named entity recognition (NER), and text classification are used to recognize sentiment changes, information leakage, or manipulative disclosures. NLP can also detect anomalies in corporate communications, financial news, and social media posts by identifying linguistic irregularities, tone shifts, and outliers.
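As a minimal sketch of this kind of screening, the example below uses spaCy's off-the-shelf NER to flag messages that mention issuers on a hypothetical watchlist; the model name, watchlist, and messages are illustrative assumptions, and a real pipeline would add sentiment and materiality scoring.

```python
# Minimal sketch: NER-based screening of messages against a watchlist
# of issuers. Watchlist, messages, and flagging rule are assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

WATCHLIST = {"Acme Corp", "Globex"}  # hypothetical issuers under review

messages = [
    "Heard Acme Corp will report Q3 numbers way above street estimates, buy Monday",
    "Lunch at noon?",
]

for msg in messages:
    doc = nlp(msg)
    orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
    hits = orgs & WATCHLIST
    if hits:
        # A real pipeline would add sentiment / materiality scoring here.
        print(f"FLAG for review: {msg!r} (mentions {hits})")
```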
III. Network and Graph Analysis
Network and graph analysis exploits the relationships between entities, such as investors, companies, and trading operations, to discover suspicious patterns and anomalies. Behavioral graphs, Social Network Analysis (SNA), and Graph Neural Networks (GNNs) can help uncover coordinated behavior, information diffusion, and collusion. Behavioral graph analytics, in particular, traces trader behavior through time, such as a trader's repeated success around market-moving events or coordinated trading activity that may suggest collusive insider trading. These techniques usually process structured interaction data and are increasingly being extended to unstructured relational metadata.
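A minimal sketch of the idea, using networkx on synthetic data: a communication graph is partitioned into communities, and a community whose members all traded a security ahead of an event is queued for review. The heuristic and threshold are assumptions for illustration.

```python
# Minimal sketch: flagging clusters of traders who communicate with each
# other and trade the same security ahead of an event. Data is synthetic.
import networkx as nx

G = nx.Graph()
# Edges = observed communications between accounts (illustrative).
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # tightly connected trio
    ("D", "E"),                            # unrelated pair
])

# Accounts that bought stock XYZ in the week before an announcement (assumed).
pre_event_buyers = {"A", "B", "C", "E"}

for community in nx.algorithms.community.greedy_modularity_communities(G):
    overlap = community & pre_event_buyers
    # Heuristic: a whole connected community buying pre-event is worth review.
    if len(overlap) >= 3:
        print(f"Review cluster {sorted(community)}: {len(overlap)} pre-event buyers")
```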
IV. Blockchain and Distributed Ledger Technologies (DLT)
These technologies offer immutable trade histories and real-time audit trails, which facilitate secure and transparent monitoring of structured transactional data (Anirudh S. Krishnan, Blockchain and Securities Regulation, 92 Chi.-Kent L. Rev. 435, 439-41 (2018)). Tamper-proof transaction records can make AI-generated alerts more credible, strengthening compliance before courts and regulatory bodies.
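The tamper-evidence property rests on hash chaining, which the short sketch below illustrates with plain SHA-256 (a real DLT adds distribution and consensus on top of this idea):

```python
# Minimal sketch of tamper evidence via hash chaining, the core idea
# behind ledger-backed audit trails.
import hashlib
import json

def chain(records):
    """Return link hashes: each record is hashed together with the
    previous link, so altering any record breaks all later hashes."""
    hashes, prev = [], "0" * 64
    for rec in records:
        payload = json.dumps({"rec": rec, "prev": prev}, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        hashes.append(prev)
    return hashes

trades = [
    {"acct": "A", "sym": "XYZ", "qty": 100},
    {"acct": "B", "sym": "XYZ", "qty": 250},
]
original = chain(trades)

trades[0]["qty"] = 99999  # simulate after-the-fact tampering
print("tampered:", chain(trades)[-1] != original[-1])  # -> True
```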
V. Big Data Analytics and Cloud Computing
- Big Data Analytics enables the integration of large volumes of data from diverse sources such as trading records, social media, news, and financial reports. This integration helps detect complex patterns and abnormalities that may indicate insider trading (a minimal integration sketch follows this list).
- Cloud platforms allow regulatory organizations and financial institutions to collaborate by providing centralized, accessible infrastructure for sharing data and analyses.
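As a minimal sketch of the integration mentioned above, the example below aligns each trade with the most recent news-sentiment reading using pandas; the data and the assumed upstream sentiment scores are synthetic.

```python
# Minimal sketch: aligning each trade with the most recent news sentiment
# reading, one simple form of multi-source integration. Data is synthetic.
import pandas as pd

trades = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-02 10:00", "2024-01-02 14:30"]),
    "acct": ["A", "B"],
    "qty": [100, 5000],
}).sort_values("ts")

news = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-02 09:55", "2024-01-02 14:00"]),
    "sentiment": [0.1, -0.8],  # assumed scores from an upstream NLP stage
}).sort_values("ts")

# Each trade picks up the latest sentiment reading at or before its timestamp.
enriched = pd.merge_asof(trades, news, on="ts")
print(enriched)
```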
ETHICAL IMPLICATIONS OF USING AI EVIDENCE
Financial surveillance has been transformed by AI and automation, which allow key indicators of insider trading to be detected quickly in large volumes of data (e.g., trading records, communications, and market data). Despite improving detection efficiency, their deployment raises ethical issues, especially around data privacy, transparency, and misuse.
Black Box Problem: The black box problem in artificial intelligence (AI) is the difficulty of comprehending and explaining the decision-making procedures of AI systems, which frequently operate in ways that are not evident to users or developers. The problem is especially acute in areas such as healthcare, military operations, and law enforcement, where the effects of AI decisions can be severe. Most AI systems, especially deep learning models, are opaque, making it hard to understand how they identify suspicious activity. The EPJ Data Science article stresses that explainability is particularly relevant in the legal field, including court cases.
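By way of contrast with opaque models, the sketch below fits a small decision tree, an interpretable model whose full decision path can be printed for an analyst or a court; the features and labels are illustrative assumptions.

```python
# Minimal sketch contrasting the black box problem: a decision tree whose
# reasoning can be printed verbatim. Features and data are illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed features: [trade size (std. units), days before announcement]
X = [[0.2, 30], [0.3, 25], [2.5, 2], [3.0, 1], [0.1, 40], [2.8, 3]]
y = [0, 0, 1, 1, 0, 1]  # 1 = flagged in a (synthetic) labelled history

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a deep network, every decision path can be read out directly:
print(export_text(tree, feature_names=["trade_size", "days_to_event"]))
```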
Ethical Concerns and Proposed Solutions
I. Privacy Concerns
Tight data protection laws should be enacted, together with informed consent and data anonymization, to prevent re-identification.
II. False Positives and Accuracy
Diverse training data and frequent validation should be used to improve the accuracy of AI models and reduce false positives.
III. Responsibility and Transparency
Explainable AI systems must be developed and human supervision incorporated to facilitate transparent decision-making.
IV. Potential for Misuse
Strong security measures and codes of ethics must be put in place so that AI is not misused for illicit purposes.
V. Regulatory Gaps
Regulatory frameworks should be revised to address AI-related issues and to harmonize standards globally.
VI. Ethical AI Design
Treat ethical AI development as a priority, focusing on principles such as fairness, transparency, and value alignment.
LEGAL FRAMEWORK GOVERNING INSIDER TRADING AND USE OF AI
The legal framework governing insider trading in India is primarily administered by the Securities and Exchange Board of India (SEBI), which aims to ensure market fairness by preventing the misuse of undisclosed, price-sensitive information. Despite these regulations, insider trading remains a significant challenge in India, with courts often relying on circumstantial evidence to adjudicate cases. The use of Artificial Intelligence (AI) in the Indian share market is also on the rise; however, the regulatory framework has not fully adapted to AI and other evolving technologies, necessitating a proactive approach by SEBI. This article synthesizes key provisions, judicial precedents, and global perspectives to provide a thorough understanding of the framework.
1. SEBI (Prohibition of Insider Trading) Regulations, 2015
The SEBI is the primary regulatory body overseeing insider trading in India. It enforces the SEBI (Prohibition of Insider Trading) Regulations, which aim to prevent unfair trading practices by insiders who possess non-public information.
The SEBI (Prohibition of Insider Trading) Regulations, 2015, form the cornerstone of India’s insider trading regulatory framework, aiming to prevent the misuse of UPSI and ensure market integrity and transparency. It introduced significant reforms, including a Structured Digital Database, to enhance compliance and monitor insider trading activities.
Key Provisions
- Definition of UPSI (Regulation 2(1)(n)): UPSI is information relating to a company or its securities that is not generally available and, if made public, could materially affect the price of the securities. Examples include financial results, dividends, changes in capital structure, mergers, acquisitions, and delistings (SEBI Regulations, 2015).
- Definition of Insider (Regulation 2(1)(g)): An insider is any person connected to the company, in possession of UPSI, or with access to it, including immediate relatives, holding companies, senior executives, stock exchange officials, and others.
- Prohibition on Trading (Regulation 3): Insiders are prohibited from trading while in possession of UPSI, except under specific conditions, such as approved trading plans.
- Leakage Prevention: A 2018 amendment (effective 2019) mandates companies to have plans to prevent UPSI leakage and address security lapses.
- Structured Digital Database (SDD) (Regulation 3(5)): Entities handling UPSI must maintain a digital database detailing who shared the information, with whom, and the nature of the UPSI, thereby ensuring traceability (a minimal schema sketch follows this list).
- Communication of UPSI (Regulation 3(3)): Communication is permissible for legitimate purposes, such as open offers under SEBI’s takeover regulations if it is in the company’s best interest.
- Code of Practices (Regulation 8(1)): Listed entities must publish a code for the fair disclosure of UPSI, ensuring equitable access and transparency.
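As flagged above, a minimal sketch of what an SDD-style log might look like follows; the column set is one illustrative reading of Regulation 3(5), not SEBI's prescribed format.

```python
# Minimal sketch of an SDD-style table capturing who shared UPSI with
# whom. Columns are an illustrative reading of Regulation 3(5).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE sdd_log (
        entry_id     INTEGER PRIMARY KEY,
        shared_by    TEXT NOT NULL,   -- person sharing the UPSI
        shared_with  TEXT NOT NULL,   -- recipient
        upsi_nature  TEXT NOT NULL,   -- e.g. 'Q3 results', 'merger talks'
        shared_at    TEXT NOT NULL    -- ISO-8601 timestamp
    )
""")
con.execute(
    "INSERT INTO sdd_log (shared_by, shared_with, upsi_nature, shared_at) "
    "VALUES (?, ?, ?, ?)",
    ("CFO", "External Auditor", "Q3 results", "2024-01-02T10:15:00"),
)
# Traceability query: everyone who received a given item of UPSI.
for row in con.execute(
    "SELECT shared_with, shared_at FROM sdd_log WHERE upsi_nature = ?",
    ("Q3 results",),
):
    print(row)
```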
Relevance to AI
Although the regulations do not explicitly address AI, provisions such as the SDD requirement emphasize traceability, which is critical for AI-generated evidence. AI systems analyzing trading patterns or communications must align with transparency and accountability standards to produce admissible evidence. The black box problem, where AI models lack explainability, poses challenges as courts require clear evidence to prove insider trading violations.
2. SEBI Act, 1992
The SEBI Act, 1992, laid the foundation for regulating insider trading in India, aiming to ensure market fairness and transparency; the 2015 regulations further refined these measures.
Key Provisions
- Section 11(2)(g): Empowers the SEBI to prohibit insider trading in securities.
- Section 11(2A): Allows SEBI to inspect books, registers, and documents of companies suspected of insider trading.
- Section 12A(d) and (e): Prohibits dealing in securities while in possession of material non-public information (MNPI) and communicating such information.
- Section 15G: Prescribes monetary penalties for insider trading of up to ₹25 crore or three times the profits made, whichever is higher; criminal prosecution under Section 24 of the Act can additionally result in imprisonment of up to ten years.
- Section 11C: Authorizes SEBI’s Investigating Authority to require the production of documents, including electronic records, and to seize them if they are at risk of destruction.
- Section 15I(2): Allows adjudicating officers to summon evidence, which may include electronic records.
- Section 15U(2): Grants the SAT powers akin to those of a civil court for summoning documents and evidence, including electronic evidence.
Relevance to AI
The SEBI Act provides a broad framework for regulating market abuse and handling evidence, including electronic records generated by AI systems. Although AI is not explicitly mentioned, Sections 11C and 15I(2) enable SEBI to use advanced technologies such as AI for investigations, provided that the evidence meets the legal standards for admissibility.
3. Indian Evidence Act, 1872
The Indian Evidence Act of 1872 governs the admissibility of evidence in Indian courts, including electronic and AI-generated evidence.
Key Provisions
- Section 3: Defines evidence as documents, oral statements, or material objects that make a fact more or less probable. AI-generated outputs, such as flags or reports, qualify as documentary evidence if they are relevant to insider trading.
- Section 45: Allows expert opinions on technical matters, requiring AI outputs to be reliable and explicable.
- Section 65B: Introduced by the IT Act, 2000, this section governs electronic records and their admissibility. A certificate under Section 65B(4) is mandatory, identifying the record, describing its production, and detailing the device involved, and is issued by a responsible official. Exceptions apply if the original device (e.g., a laptop) is produced in court.
4. Information Technology Act, 2000
The IT Act, 2000, provides legal recognition of electronic records and digital signatures, supporting the admissibility of AI-generated evidence.
Key Provisions
- Section 4: Grants legal recognition to electronic records, ensuring that AI outputs are valid if properly authenticated.
- Section 5: Recognizes digital signatures as relevant for securing electronic records.
Relevance to AI
The IT Act supports the use of AI-generated evidence by recognizing electronic records as being legally valid. However, it lacks AI-specific provisions, relying on Section 65B of the Indian Evidence Act’s general framework, which poses challenges for complex AI systems owing to transparency and authentication requirements.
Relevant Case Laws
SEBI v. Kanaiyalal Baldevbhai Patel (2017)
- Context: This Supreme Court case addressed front-running, a form of market manipulation under the SEBI (Prohibition of Fraudulent and Unfair Trade Practices Relating to Securities Market) Regulations, 2003 (PFUTP Regulations), and not insider trading. However, it is relevant for the interpretation of market abuse.
- Key Holdings: The Court held that front-running by non-intermediaries is prohibited under Regulations 3 and 4 of the PFUTP Regulations. Front-running is defined in three forms: tippee trading, self-front-running, and trading ahead. Notably, mens rea (intent) is not required for civil violations, emphasizing objective evidence (SEBI v. Kanaiyalal Baldevbhai Patel, 2017).
- Relevance to AI: Although not directly addressing AI, the case underscores the need for clear, objective evidence in market abuse cases, highlighting the challenges of AI evidence due to potential opacity.
Sahara Group Cases
- Context: The Sahara Group cases, particularly Sahara India Real Estate Corp. Ltd. v. SEBI (Supreme Court, 2012), involved investor fraud through the illegal issuance of Optionally Fully Convertible Debentures (OFCDs), raising over ₹24,000 crore from over three crore investors without SEBI approval. The Supreme Court ordered refunds with 15% interest (Sahara India Real Estate Corp. Ltd. v. SEBI, 2012).
- Relevance to AI: These cases focus on fraud rather than insider trading; however, they highlight SEBI’s enforcement powers under Section 11 of the SEBI Act, 1992, and the need for robust evidence. AI-generated evidence can be used in similar investigations, provided that it meets Section 65B requirements.
Comparative Insights: SEC (US) and FCA (UK)
US Securities and Exchange Commission (SEC)
- Insider Trading Regulations: The SEC regulates insider trading under Rule 10b-5 of the Securities Exchange Act of 1934, prohibiting trading on material non-public information (MNPI). Penalties include fines, profit disgorgement, and imprisonment of up to 20 years.
- AI Surveillance: The SEC uses AI for market surveillance and analyzes trading patterns and communications to detect insider trading. The Proposed Predictive Analytics Rules (2023) require broker-dealers and investment advisers to eliminate or neutralize conflicts of interest from AI use in investor interactions, addressing risks such as the misuse of MNPI (Skadden, 2024).
- Admissibility of AI Evidence: The US Federal Rules of Evidence (FRE) govern admissibility, with FRE 702 requiring reliable expert testimony and FRE 901 requiring authentication. The SEC emphasizes transparency to address the black box problem and ensure that AI outputs are defensible in court.
- Key Difference: The SEC is more proactive in proposing AI-specific rules, unlike SEBI’s reliance on general provisions, offering a model for India to develop targeted guidelines.
UK Financial Conduct Authority (FCA)
- FCA: The FCA enforces insider dealing under the Market Abuse Regulation (MAR) and the Criminal Justice Act 1993, requiring firms to monitor trading and communications for market abuse.
- AI Surveillance: The FCA actively explores AI through initiatives such as the AI Lab and Market Abuse Surveillance TechSprint, testing AI for detecting insider dealing and market manipulation. It relies on existing frameworks, such as the Senior Managers and Certification Regime and Consumer Duty for enforcement (FCA AI Update, 2024).
- Admissibility of AI Evidence: In the UK, the Civil Evidence Act 1995 governs electronic evidence, requiring reliability and authenticity. The FCA’s principles-based approach ensures that AI outputs are scrutinized for transparency, similar to India’s Section 65B requirements.
- Key Difference: The FCA’s principles-based approach and AI testing initiatives provide a flexible framework, contrasting with India’s more rigid statutory requirements under Section 65B.
Comparison with India
- Regulatory Approach: India’s SEBI relies on general provisions for electronic evidence and lacks AI-specific guidelines. The SEC’s proposed AI rules and the FCA’s AI Lab initiatives are more advanced, focusing on conflict and innovation, respectively.
- Evidence Admissibility: All three jurisdictions require transparency and authentication for electronic evidence, but the US and UK have more developed case law on technical evidence, offering clearer standards than India’s evolving jurisprudence.
- Lessons for India: SEBI could adopt the SEC’s proactive AI regulation and the FCA’s principles-based approach to develop guidelines that address transparency, accountability, and bias in AI surveillance.
LEGAL ADMISSIBILITY OF AI GENERATED EVIDENCE IN INSIDER TRADING CASES
AI-generated flags, reports, and alerts are likely to be offered as evidence in insider trading cases in India, but they must pass the test of admissibility under the Indian Evidence Act, 1872, and be consistent with SEBI’s regulatory requirements. The following provisions are important.
- Section 3, Indian Evidence Act, 1872: The Act defines evidence as a document, oral statement, or material object produced for the court’s inspection that renders a fact more or less probable. An AI-generated report of suspicious trading patterns can qualify as documentary evidence so long as it is relevant to establishing insider trading.
- Section 65B, Indian Evidence Act, 1872: The admissibility of electronic records is governed by Section 65B, which requires a certificate establishing the integrity and origin of the record. AI outputs are electronic records and must therefore comply with this provision to be admissible in court.
- SEBI (Prohibition of Insider Trading) Regulations, 2015: Regulation 3 prohibits trading on unpublished price-sensitive information, and Regulation 7 imposes disclosure obligations that support investigations. AI outputs can serve as evidence where they demonstrate violations, such as trading ahead of price-sensitive announcements.
- Section 45, Indian Evidence Act, 1872: Expert opinions on technical matters (including AI outputs) are admissible only if founded on reliable methods and intelligible to the court.
Challenges with the Admissibility of AI-Generated Evidence
The legal admissibility of AI evidence, especially in insider trading, is complicated by the transparency and explainability problems inherent in the so-called black box of many AI models. Such models do not always expose their reasoning, making their decision-making process hard to understand.
1. Judicial Demand for Transparency
Challenge: Indian courts, including the SAT and the Supreme Court, stress that evidence must be transparent and open to scrutiny in order to be reliable and just. Deep learning models, however, often cannot explain why a particular trade is considered suspicious, because their decisions involve complex, non-linear interactions within neural networks. This opacity, known as the black box problem, makes it difficult for prosecutors to meet judicial reliability criteria.
2. Fairness and Due Process
Challenge: The Indian Constitution (Article 21) and the Indian Evidence Act guarantee due process, giving defendants the opportunity to contest the evidence presented against them. Where AI produces opaque results, defendants find it difficult to refute such evidence because they cannot retrieve or understand the logic behind the model’s decision. This contradicts the principles of justice emphasized by the EPJ Data Science article, which stresses the need for explainability in legal settings.
3. Risk of Bias and Errors
Challenge: Courts scrutinize evidence for biases or flaws that could lead to wrongful charges based on AI-generated output. In insider trading cases, SEBI must be assured that AI output is free of systemic flaws that might misidentify legitimate trades. The black box problem makes it hard to ascertain whether an AI model contains biases (e.g., from unbalanced training data) or errors, so its output is difficult to verify; without such assurance, courts cannot determine a model’s accuracy or fairness.
4. Judicial Doubts about Technical Evidence
Challenge: Indian courts, the SAT included, are wary of technical evidence. As the Supreme Court held in SEBI v. Rakhi Trading Pvt. Ltd. (2018), clear and verifiable evidence is necessary to establish market manipulation. AI evidence, being new and complex, attracts heightened scrutiny.
5. Chain of Custody
Challenge: The chain of custody is the record showing that evidence remains untouched from the moment it is produced until it is presented in court. AI outputs, being electronic records, can be quickly altered, corrupted, or manipulated without authorization, and the multitude of servers, cloud data stores, and third-party providers makes maintaining a continuous chain of custody complex during forensic operations.
6. Authentication
Challenge: Section 65B stipulates that electronic evidence must be shown to be genuine and from a reliable source. For AI-generated outputs, this means demonstrating that the system’s configuration, training data, and overall operation are as claimed. Testing such complex systems is not easy: because of proprietary interests, developers usually keep key details such as the underlying algorithms or training data secret, making it difficult to verify the system’s integrity. Furthermore, the lack of standardized, clear procedures for verifying AI systems, particularly across data processing, model training, and third-party vendor involvement, adds further uncertainty.
RECOMMENDATIONS
As AI technologies become a more fundamental part of insider trading detection, it is important to define the rules for their admissibility in order to preserve justice and procedural fairness.
To reinforce the legal standing of AI-generated evidence in insider trading cases and to strengthen the ethical oversight of AI within India’s legal framework, several recommendations can be made based on the study of existing legal systems and their challenges.
1. Strengthen the Law on AI Evidence Admissibility
The Indian Evidence Act, 1872, should be strengthened to address AI-specific issues and guarantee the admissibility of AI-generated evidence (flags, reports, alerts) in insider trading cases.
- Recommendations:
Amend the Indian Evidence Act: Specific provisions should be amended to cover AI-generated evidence and to state what is required to maintain transparency, explainability, and authentication. One option is amending Section 65B to add measures for certifying AI outputs, with documented algorithms, training data, and decision paths.
Mandate Explainability: AI used in insider trading surveillance should rely on interpretable models (e.g., decision trees, hybrid systems) to address the black box problem and thereby meet Section 45’s requirement of trustworthy expert testimony.
Corroborative Evidence: Courts and the Securities Appellate Tribunal (SAT) should be advised to accept AI evidence when it is backed by traditional evidence (e.g., communications, financial records), boosting admissibility, as observed in SEBI v. Rakhi Trading Pvt. Ltd. (Supreme Court, 2018).
Judicial Instructions: Guidelines should govern how judges and SAT members evaluate AI evidence, ensuring consistency in decisions and keeping transparency central to the detection of insider trading (Risk and Legal Regulation of Algorithm Application in Insider Trading Supervision, 2023).
Disclosure Obligations: Commercial entities should be required to disclose AI-generated evidence, with courts entitled to presume causality in cases of non-compliance.
Protection of Rights: In constructing legal solutions, especially around data privacy and security, priority should be given to safeguarding personal rights and interests and preventing the risk of harm (Risk and Legal Regulation of Algorithm Application in Insider Trading Supervision).
- Impact:
Enhanced standards will raise the success rate of prosecutions relying on AI evidence and ensure fairness, as required by Article 21. Although these recommendations aim to optimize the admissibility of AI evidence, the risks of over-reliance on AI systems and the due-process concerns around fairness must also be taken into account. A balance between innovation and legal protection will play a pivotal role in the changing landscape of insider trading law.
2. Introduce SEBI Guidelines on AI Audits and Accountability
SEBI guidelines on AI audits and accountability in insider trading detection are essential to increase regulatory compliance and the effectiveness of fraud detection. Through AI, SEBI can monitor trading practices more efficiently and thereby enhance market integrity.
- Recommendations:
AI Audit Requirements: AI systems should be audited by an audit committee to ensure they are accurate, fair, and compliant with data protection law. Training data, algorithmic biases, and output reliability should be audited for consistency with Section 65B (authentication) and Article 14 (non-discrimination).
Accountability Systems: The accountability of AI developers, financial organizations, and SEBI officials for mistakes, biases, or data breaches should be clearly defined. Auditability also requires firms to keep records of AI decision-making processes.
Transparency Standards: Require explainable AI models to correct the black box problem, make outputs defensible in court, and comply with Section 45 (EPJ Data Science).
Data Protection Compliance: AI systems must adhere to the principles of the DPDP Act (e.g., consent, data minimization, and security) to protect the privacy rights guaranteed under Article 21.
Reporting Framework: A reporting system should require firms to report their AI surveillance practices, audit findings, and compliance mechanisms to SEBI, strengthening regulatory oversight.
Additionally, SEBI should establish a framework that integrates AI-based compliance checks so that risks can be monitored and assessed in real time.
Ethical Concerns: Addressing algorithmic bias and making AI decision-making more transparent are key to upholding stakeholder trust. Training auditors and stakeholders in AI technologies can address these ethical issues and increase the effectiveness of audits.
- Impact:
Concrete guidelines will increase the credibility and admissibility of AI-related evidence, reduce judicial skepticism, and uphold ethical standards, strengthening SEBI’s enforcement authority. Although AI-based auditing has clear advantages, it also faces problems such as data quality and algorithmic bias; a balanced methodology that emphasizes both innovation and ethical governance is therefore key to the successful application of AI to insider trading.
3. Corporate Internal Policy Suggestions
The successful application of AI surveillance to insider trading requires organizations to have a comprehensive internal policy framework, including the establishment of an AI Ethics Board. Such a board would oversee the ethics of AI deployment, ensuring that surveillance activities do not contradict the company’s corporate governance and ethical standards.
- Recommendations:
Create AI Ethics Boards: Establish boards dedicated to AI development and deployment that include representatives of the compliance, legal, IT, and data protection teams. The board should audit AI systems for ethical risks (e.g., privacy breaches or bias) and for consistency with DPDP Act principles (Sections 5-6).
Privacy Protection: Require consent-based data processing, data minimization, and anonymization, in accordance with Section 6 of the DPDP Act.
Fairness: Diverse training data and periodic bias audits should be used to avoid biased results, in accordance with Article 14.
Transparency: Explainable AI models are needed so that results can be examined, upholding admissibility under Section 45 and fairness under Article 21.
Responsibility: Establish a process for resolving AI errors, with an escalation route through compliance officers and DPOs.
Frequent Training: To use AI responsibly, employees should be trained in ethics, privacy law, and SEBI regulations.
Audit and Reporting: Conduct internal audits of AI systems and report the findings to SEBI to ensure transparency and compliance.
- Impact:
Privacy and fairness risks will be minimized, the credibility of AI evidence will increase, and trust in corporate surveillance will grow. Although these suggestions aim to improve corporate governance through responsible AI use, there is an opposing view concerning the risk of overreach and privacy violations, which underscores the need to keep surveillance activities in balance.
4. Interdisciplinary Surveillance
AI surveillance of insider trading increasingly requires interdisciplinary oversight as financial markets become more intricate and technology-driven. Blending law and technology can shape regulatory frameworks so that the AI systems used in market monitoring are both effective and lawful. Such oversight can reduce the risks of algorithmic trading and improve financial market integrity.
- Recommendations:
Interdisciplinary Oversight Committees: To audit AI systems for legal and ethical soundness, interdisciplinary oversight committees composed of lawyers, data scientists, compliance officers, and DPOs should be established at SEBI and at financial firms. Existing legislation lacks the tools to cope with AI-related trading complexities, as traditional liability principles cannot address the black box problem of AI systems.
Legal Expertise: Legal scholars must ensure that AI evidence complies with the standards of the Indian Evidence Act (Sections 3, 45, 65B) and constitutional safeguards (Articles 14, 21) on admissibility and fairness.
Technical Expertise: Data scientists should evaluate AI models for accuracy, bias, and explainability, alleviating the black box problem and ensuring high-quality output. Combining technology and law can yield strong supervisory technologies (SupTech) that facilitate market monitoring and identify manipulative conduct.
AI-Crime Risks: Because AI can be used to facilitate crimes, preemptive regulation of AI is required, balancing innovation and ethical values.
Human Oversight: AI systems should be designed so that humans can intervene, ensuring legal standards are honored and individual rights are not violated (Etzioni & Etzioni).
- Impact:
Interdisciplinary regulation can close gaps between law and technology by making AI evidence admissible, ethical, and compliant with both regulation and the Constitution. However, excessively strict laws could suppress innovation in AI technology and slow progress that might otherwise increase efficiency and transparency in the market.
Conclusion
To increase the admissibility and ethical application of AI-generated evidence in insider trading cases in India, it is necessary to strengthen legal standards, introduce SEBI guidelines, create and enforce corporate AI policies, and develop interdisciplinary oversight. The recommendations address black box problems, judicial skepticism, and regulatory gaps to ensure compliance with the SEBI (Prohibition of Insider Trading) Regulations, 2015, the Indian Evidence Act, 1872, and the DPDP Act, 2023. Explainable AI, solid auditing, internal ethics boards, and collaborative oversight can help SEBI and financial organizations strike a balance between surveillance, privacy, and fairness, contributing to effective enforcement while protecting fundamental rights.
GRAY AREAS
Insider Trading in the Age of Social Media: Can AI Monitor Telegram, WhatsApp, and X (Twitter)?
Apps like Telegram, WhatsApp, and X (formerly Twitter) offer users a semi-anonymous, decentralized, and frequently encrypted space (compared with traditional corporate communication tools such as internal email systems or Bloomberg terminals), which allows sensitive tips to be shared more discreetly.
For regulators and stock exchanges, this shift is a paradox. AI systems can filter through large volumes of data for signs of insider trading. However, the casual, multilingual nature of social media conversations and advanced concealment techniques make it much harder to distinguish genuine violations from harmless speculation.
AI tools using Natural Language Processing (NLP) and machine learning are increasingly capable of detecting signs of market abuse hidden under the chaotic environment of social media. Even amid slang and coded chatter, advanced transformer-based models can identify sentiment shifts, coordinated activity, and unusual patterns on platforms like Twitter, Telegram, and WhatsApp, often revealing early traces of pump-and-dump schemes or insider leaks.
The potential of AI thus lies in identifying coded language, slang, and even memes that may contain references to non-public material information.
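As a minimal sketch of such screening, the example below applies an off-the-shelf zero-shot classifier to social-media posts; the model choice, candidate labels, and threshold are assumptions, and any real deployment would require domain fine-tuning and heavy human review.

```python
# Minimal sketch: zero-shot screening of social-media chatter for possible
# tip-like content. Labels and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

posts = [
    "big pharma co getting bought next week, load up before the news drops",
    "$TSLA to the moon lol",
]
labels = ["possible insider tip", "ordinary hype or joke"]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label == "possible insider tip" and top_score > 0.7:
        print(f"queue for analyst review: {post!r} ({top_score:.2f})")
```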
Challenges
Although AI is increasingly promising for monitoring digital communication and identifying insider trading, significant issues arise, especially around privacy, semantic ambiguity, and scale. The critical obstacles that must be overcome for responsible and successful implementation are the following.
1. Encrypted Platforms
End-to-end encrypted platforms, such as WhatsApp and Signal, have made surveillance difficult. Since message content cannot be read by third parties, including the platform operators, AI models cannot analyze the actual communication data unless access is explicitly granted by users or regulators. This creates a black box scenario in which the actual content of communication is obscured and only limited metadata, such as timestamps or communication patterns, is available, and in most instances even that cannot be freely accessed without legal authorization.
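Where content is unreadable, analysis can fall back on lawfully obtained metadata. The sketch below is a crude burst detector over message timestamps in a group relative to a known announcement time; the data, window, and threshold are all assumptions.

```python
# Minimal sketch: metadata-only analysis of an encrypted group chat,
# flagging messaging bursts shortly before an announcement. Synthetic data.
import pandas as pd

msg_times = pd.to_datetime([
    "2024-03-01 09:00", "2024-03-01 09:02", "2024-03-01 09:03",
    "2024-03-01 09:05", "2024-03-05 16:00",
])
announcement = pd.Timestamp("2024-03-01 10:00")

# Message counts in 15-minute bins, compared against the overall average.
counts = pd.Series(1, index=msg_times).resample("15min").sum()
baseline = counts.mean()

pre_window = counts[announcement - pd.Timedelta("2h"):announcement]
if (pre_window > 3 * baseline).any():
    print("Unusual messaging burst shortly before the announcement")
```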
2. False Positives
AI faces the problem of context identification. Social media is full of slang, memes, and sarcasm that cannot be interpreted literally. A tweet reading “$TSLA to the moon” may be a joke or mere hype, but to a program scanning for market-moving statements it may look like a significant tip-off.
Misinterpretations can lead to false positives, flagging innocent users while missing real offenders. Too many errors of this kind could undermine trust in AI surveillance itself.
3. Volume and Scalability
Social media platforms generate billions of posts, messages, and multimedia files daily. Monitoring such data volumes requires highly scalable AI infrastructure combined with intelligent filtering and prioritization mechanisms. The problem is aggravated when messages are multilingual or encoded in local dialects.
Unsupervised learning and anomaly detection techniques, rather than rule-based systems, are increasingly necessary to sift through noise and detect novel or emerging fraud strategies.
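A minimal sketch of that unsupervised approach: an Isolation Forest trained on per-account activity features flags outliers without any hand-written rules. The features and data are synthetic assumptions.

```python
# Minimal sketch: unsupervised anomaly detection over per-account activity
# features, as an alternative to brittle keyword rules. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per account-day: [posts about a ticker, trade volume,
# minutes between first post and first trade]
normal = rng.normal(loc=[5, 100, 600], scale=[2, 30, 120], size=(500, 3))
odd = np.array([[40, 5000, 4]])  # heavy chatter then an immediate trade
X = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)      # -1 marks anomalies

print("accounts flagged for review:", np.where(flags == -1)[0])
```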
When AI Misses the Signal: Case Studies
A series of WhatsApp group messages leaked the earnings and M&A activity of listed companies before SEBI announcements. Although India operates AI-based trade surveillance, none of these messages was flagged until human whistleblowers revealed the leak. The failure was twofold: the messages were written in Hinglish and distributed as images and PDF files, which the text-based AI could not process; and they were encrypted, so regulators could not access the content. The case highlights how encryption and informal communication can render AI surveillance ineffective.
Elon Musk’s tweet indicating his intention to take Tesla private at $420 per share, accompanied by the assertion “funding secured”, produced an immediate 10% surge in the company’s stock price.
However, AI-driven market monitoring systems failed to flag the post as suspicious until after the price spike. The lapse on the part of AI was primarily due to three factors: (i) the tweet was phrased as a casual, ambiguous first-person statement rather than a formal disclosure, so NLP sentiment and materiality filters scored it below risk thresholds; (ii) the models had not been trained on CEO social-media posts as primary disclosure channels, lacking labelled examples of comparable “take-private” tweets; and (iii) real-time ingestion latency—the tweet was processed as ordinary user-generated content.
Together, these incidents expose AI’s blind spots of context, language, and format that allow insider information to slip through the cracks.
4. Ethics and Legal Constraints
In some jurisdictions, such as the European Union, the General Data Protection Regulation (GDPR) imposes tight restrictions on data gathering: users must explicitly consent, and use of the data must be confined to clearly defined purposes. Similarly, the California Consumer Privacy Act (CCPA) allows users to access, delete, or block the sharing of their personal information, which makes mass behavioral surveillance legally complicated.
In India, the situation is still evolving. The Supreme Court in Justice K.S. Puttaswamy v. Union of India recognized the right to privacy as a fundamental right under Article 21 of the Constitution. Additionally, the Information Technology Act, 2000, along with its Intermediary Guidelines (2021), restricts how intermediaries may collect and disseminate user data. Surveillance of private or encrypted chats without authorization may violate Section 43A and amount to unauthorized profiling. Pervasive monitoring, however well grounded in the rationale of insider trading detection, may have a chilling effect on speech. This is especially worrying when AI systems falsely label innocuous comments as suspicious. When uninhibited data gathering is sanctioned through algorithmic governance, we surrender power that belongs to individuals and nullify the safeguards to which the Constitution entitles us.
