Will AI Tools Help Detect Telecom Fraud?
Context:
The Department of Telecommunications (DoT) has started using an artificial intelligence-based facial recognition tool, the “Artificial Intelligence and Facial Recognition powered Solution for Telecom SIM Subscriber Verification” (ASTR), to weed out fraudulently obtained SIM cards being used across the country for financial and other cyber scams.
Why is AI used to detect telecom fraud?
- Punjab police recently blocked about 180,000 SIM cards that had reportedly been activated using false names.
- Among these blocked SIM cards, 500 connections were found to carry the same person’s photo but different associated KYC parameters, such as names and address proofs.
- India has the world’s second-largest telecom ecosystem, with over 1.17 billion subscribers.
- Manually identifying and comparing the enormous volume of subscriber verification documents, including photos and identity proofs, is difficult and time-consuming.
- The Department of Telecommunications (DoT) in India plans to deploy a facial recognition-based platform named ASTR to address this problem.
- ASTR, described as an indigenous and NextGen solution, will analyse the entire subscriber base of all Indian telecom service providers (TSPs).
- Existing text-based analysis techniques are limited in their ability to identify patterns in identity proofs and validate the veracity of the information (a limitation illustrated in the sketch after this list).
- These techniques fall short when trying to find similar faces in photographic data.
- The ASTR platform removes these limitations and improves subscriber verification procedures by applying facial recognition technology.
- The DoT intends to use ASTR to analyse this enormous subscriber base, enabling false identities to be spotted more efficiently and accurately.
- By identifying and blocking the abuse of SIM cards obtained through dishonest means, ASTR is expected to improve security.
- ASTR’s ability to match faces with similar features can help identify instances in which the same photo has been used with different supporting KYC parameters.
- By examining the subscriber bases of all TSPs, ASTR will help ensure the veracity and integrity of subscriber identities in India’s telecom ecosystem.
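To make the limitation above concrete, here is a minimal, hypothetical sketch (the names and the threshold are invented for illustration) of why exact text comparison misses near-duplicate KYC entries that approximate matching catches:

```python
from difflib import SequenceMatcher

# Two KYC name entries that may refer to the same person but differ slightly.
name_a = "Ramesh Kumar"
name_b = "Ramesh Kumaar"  # a deliberate spelling variation

# Exact text comparison (the traditional approach) treats them as unrelated.
print(name_a == name_b)  # False

# Approximate ("fuzzy") comparison scores their similarity instead.
similarity = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
print(f"similarity = {similarity:.2f}")  # 0.96

# A threshold-based rule can group such near-duplicates for review.
SAME_PERSON_THRESHOLD = 0.90  # illustrative value, not an official figure
print(similarity >= SAME_PERSON_THRESHOLD)  # True
```

Face photographs pose the same problem in a harder form: two images of one person are almost never byte-identical, which is why ASTR turns to facial feature matching rather than exact comparison.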
What is ASTR?
- ASTR is used to analyse and identify individuals in a subscriber database provided by Telecommunication Service Providers (TSPs).
- Database sharing: In 2012, the Department of Telecommunications (DoT) ordered TSPs to give the government access to their subscriber records, which may include user images.
- Facial Recognition: ASTR employs facial recognition technology (FRT) to recognise and map a person’s facial features, producing a digital map of the face. It examines the facial characteristics in the given database.
- Grouping Photographs with Similar Facial Features: ASTR groups photographs with comparable facial features, effectively clustering individuals who resemble one another.
- Textual Subscriber Details: After that, the system matches the photos in the database with the related textual subscriber details, such as names and other KYC (Know Your Customer) data.
- Fuzzy Logic and String Matching: ASTR employs “fuzzy logic” to find names or KYC details that are very similar in appearance. Instead of demanding rigorous exact matches, fuzzy logic allows for approximate matching, so records with small differences in names or other details can still be grouped together.
- Identifying Potential Fraud: In the last phase, ASTR examines the data gathered to identify any instances in which the same face (person) has obtained several SIM cards using various names, dates of birth, bank accounts, address proofs, or other KYC credentials. Additionally, it reveals situations in which a single person has received more than eight SIM connections, which is against DoT regulations.
- Facial Similarity Threshold: ASTR’s facial recognition software treats two faces as a match when it finds at least a 97.5% match between their 68 mapped facial features; this criterion decides whether two faces are deemed the same person. (A simplified sketch of this matching pipeline appears after this list.)
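The DoT has not published ASTR’s internals, so the following is only a simplified, hypothetical sketch of the pipeline described above: a short numeric vector stands in for the 68 mapped facial features, cosine similarity stands in for ASTR’s 97.5% matching score, and every record, name, and number is invented.

```python
from difflib import SequenceMatcher
from itertools import combinations

# One record per SIM: a face descriptor (a stand-in for the 68 mapped
# facial features) plus the textual KYC details. All data is invented.
RECORDS = [
    {"sim": "9000000001", "name": "Ramesh Kumar", "face": [0.11, 0.52, 0.83]},
    {"sim": "9000000002", "name": "Vijay Sharma", "face": [0.11, 0.52, 0.84]},
    {"sim": "9000000003", "name": "Suresh Singh", "face": [0.91, 0.07, 0.33]},
]

FACE_MATCH_THRESHOLD = 0.975  # ASTR's reported 97.5% criterion
MAX_SIMS_PER_PERSON = 8       # the per-person limit mentioned above

def face_similarity(a, b):
    """Cosine similarity between two face descriptors (a stand-in metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def names_differ(a, b, cutoff=0.80):
    """Fuzzy string comparison: below the cutoff, treat the names as distinct."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() < cutoff

# Step 1: group SIM records whose faces match above the threshold.
groups = []
for rec in RECORDS:
    for group in groups:
        if face_similarity(rec["face"], group[0]["face"]) >= FACE_MATCH_THRESHOLD:
            group.append(rec)
            break
    else:
        groups.append([rec])

# Step 2: within each same-face group, flag over-limit SIM counts and
# cases where the same face appears with clearly different KYC names.
for group in groups:
    sims = [r["sim"] for r in group]
    if len(sims) > MAX_SIMS_PER_PERSON:
        print(f"over-limit: {len(sims)} SIMs for one face: {sims}")
    for a, b in combinations(group, 2):
        if names_differ(a["name"], b["name"]):
            print(f"same face, different KYC names: {a['name']!r} vs {b['name']!r}")
```

Run as-is, the sketch groups the first two records by face and prints a KYC-name mismatch for them; a production system would of course use real facial embeddings and far more robust clustering.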
What are the concerns associated with the use of facial recognition AI?
- Concerns with facial recognition technology (FRT):
- Inaccuracy: Occlusion, poor lighting, facial expression, and ageing can all cause technical errors in FRT algorithms. These factors may produce false positives (misidentifying a person) or false negatives (failing to identify a person).
- Biased Error Rates: Studies have found that error rates in FRT systems can differ with a subject’s race, gender, and other characteristics. This points to possible discrimination, since some groups of people may be more likely to be misidentified or to experience higher error rates (see the worked example after this list).
- Underrepresentation in Training Data: Since FRT systems rely on training data, biases may be reinforced, and identifications of underrepresented groups may be less accurate, when some groups are underrepresented in or excluded from these datasets.
- Privacy Concerns: FRT systems capture and process biometric facial data, which poses serious privacy issues. The processing of people’s data can be beyond their control, and the widespread use of FRT may result in mass surveillance, thus violating people’s right to privacy.
- Lack of Consent: People may not be aware that FRT systems are processing their personal data, and they may not have given explicit consent for its use. This raises concerns about individuals’ ability to control their personal information and to give informed permission.
- Concerns specific to ASTR:
- Lack of Public Notification: If the use of ASTR is not publicly announced, people may not be aware that their facial data is being captured and processed. This lack of openness can erode trust and people’s capacity to make informed decisions.
- Data Retention and Safeguards: The absence of details on how ASTR protects data and how long the data is retained raises questions about data security and privacy. Without explicit rules, people cannot know how their personal information is secured.
- Privacy and Consent: Using ASTR on subscriber data without clear notification about its use or express consent raises serious privacy problems. People have a right to know and be in charge of how their facial data is gathered, used, and stored.
- Compliance with Data Protection Legislation: Employing ASTR under a theory of presumed (implied) consent raises concerns about compliance with data protection legislation. In the absence of rules and safeguards, individuals’ personal information may be open to misuse.
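The “Biased Error Rates” point above becomes clearer with a small worked example. The numbers below are entirely invented; the point is only that disaggregating false-positive and false-negative rates by group can reveal a system that looks accurate overall while failing one group far more often:

```python
# Hypothetical FRT evaluation counts, disaggregated by demographic group.
# (All numbers are invented purely to illustrate the disparity concern.)
results = {
    "group_a": {"fp": 5,  "fn": 4,  "tp": 480, "tn": 511},
    "group_b": {"fp": 42, "fn": 37, "tp": 448, "tn": 473},
}

for group, c in results.items():
    # False positive rate: wrongly matched faces / all true non-matches.
    fpr = c["fp"] / (c["fp"] + c["tn"])
    # False negative rate: missed matches / all true matches.
    fnr = c["fn"] / (c["fn"] + c["tp"])
    print(f"{group}: FPR={fpr:.1%}  FNR={fnr:.1%}")

# Prints roughly 1% error rates for group_a but around 8% for group_b:
# identical software, very different real-world risk for each group.
```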
What is the legal framework for such tech?
- Absence of a Data Protection Law: India does not have a formal data protection law in place. The government withdrew the Personal Data Protection Bill, 2019, which sought to set comprehensive standards for data protection, after a Joint Parliamentary Committee suggested significant amendments. As a result, India currently lacks specific legislation addressing data protection.
- Lack of FRT-specific Regulation: In addition to lacking a data protection law, India has no explicit legislation governing the use of facial recognition technology. No statute or regulation addresses the particular privacy and security issues associated with FRT.
- NITI Aayog’s National Strategy on AI: Papers issued by NITI Aayog, a policy think tank of the Indian government, lay out India’s national plan for harnessing the potential of artificial intelligence (AI). These papers are not legally binding, but they indicate how the government views AI and related technologies.
- Consent and Voluntary Use: Per NITI Aayog’s approach, the use of FRT should rest on informed consent and voluntary participation. Individuals should be free to opt in or out of FRT systems, and their consent should be sought before any collection or processing of their facial data.
- Non-Mandatory Nature of FRT: NITI Aayog’s approach places strong emphasis on FRT not being made mandatory. People should not be forced to undergo facial recognition or have their faces scanned against their wishes; the non-mandatory character of FRT guarantees the choice to reject such programmes.
- Public Interest and Constitutional Morality: According to the NITI Aayog’s policy, the employment of FRT should be restricted to situations in which the public interest and constitutional morality coincide. This suggests that FRT deployments should uphold constitutional rights and principles while serving justifiable public interests.