- topic: Dealing with privacy infringements as we use AI to prevent crimes
- primary questions:
In which situations has AI-aided crime detection/prevention been deployed, or will it be deployed?
Specifically, how do risks of privacy leakage arise as AI is used to prevent crime?
What are documented cases of privacy infringement?
Are there ways to prevent such leakage, and what are their pros and cons?
What do current laws say about privacy protection?
[Article]Artificial intelligence and crime: A primer for criminologists
Up to 2019, there had been numerous cases of AI-related crime (though some have only been verified in labs). The article therefore aims to justify criminologists' attention to AI crime (AIC).
- Three kinds of AI crimes:
(1) crimes with AI: AI as a tool. Examples: use by drug traffickers and terrorists, tailored phishing emails, deepfakes
AI can serve as a potent tool for ‘malicious’ criminal use by expanding and changing the inherent nature of existing threats, or by introducing new threats altogether.
(2) crimes on AI: attacks on AI systems by exploiting their vulnerabilities. Examples: data poisoning
An interesting case:
These same adversarial techniques are also being used in the context of activism and resistance against pervasive surveillance culture(s). In Belgium, researchers designed an adversarial image which, if printed and carried around, rendered a person invisible to AI computer-vision systems.
(3) crimes by AI: AI acts autonomously, resulting in violations of law
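Data poisoning (kind 2 above) can be made concrete with a toy sketch: a nearest-centroid classifier whose training labels an attacker partially controls. Everything here, the one-dimensional feature, the numbers, and the amount of poison, is invented for illustration and is not how real attacks are sized.

```python
import numpy as np

# Clean training set: class 0 clusters near x = 0, class 1 near x = 4
X = np.array([0.0, 0.2, 0.4, 3.8, 4.0, 4.2])
y = np.array([0, 0, 0, 1, 1, 1])

def centroid_predict(X_tr, y_tr, x):
    """Classify x by whichever class centroid it is closer to."""
    c0 = X_tr[y_tr == 0].mean()
    c1 = X_tr[y_tr == 1].mean()
    return int(abs(x - c1) < abs(x - c0))

# Clean model: centroids sit at 0.2 and 4.0, so x = 1.0 is class 0
assert centroid_predict(X, y, 1.0) == 0

# Poisoning: the attacker injects points labelled 1 but placed inside
# class 0's region, dragging centroid c1 towards 0.
X_p = np.append(X, [0.0] * 12)
y_p = np.append(y, [1] * 12)
# c1 moves to (3.8 + 4.0 + 4.2) / 15 = 0.8; the same input is now misclassified
assert centroid_predict(X_p, y_p, 1.0) == 1
```

The point of the sketch is that the attacker never touches the model itself, only the training data, which is why poisoning counts as a crime *on* AI rather than *with* it.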
- Policing uses of AI
law enforcement: tool/partner/usurper
(1) seeing state: scalable, comprehensive, inescapable
AI integrates easily with extant surveillance infrastructures such as cameras, recognizing emotions, body language, audio and even heartbeats.
First, AI promises (or threatens) the expansion of highly granular digital photography, described succinctly by the ACLU as the ‘dawn of robot surveillance’. Developments in data storage, along with advances in AI-enabled automatic video analytics, can turn passive, scattershot monitoring into an ever-more granular, comprehensive, and searchable surveillance record.
Some have argued that facial recognition is categorically different from other forms of surveillance and should be subject to an outright ban (Hartzog and Selinger, 2018). The ACLU, likewise, has expressed concerns that such algorithmic systems are untested, discriminatory, and subject to abuse.
‘the plutonium of AI’
maybe ‘AI policing/surveillance/supervision’ could be a better topic than ‘AI-aided crime prevention’
private corporations can exert influence on the police through their AI products
(2) hidden state: ubiquitous, tacit and deniable
combined with cheap drones and ‘smart city’ sensors.
a single drone overflight of a protest could in principle enable authorities to compile a list of all attendees.
the transition in the mode of governing: from overt, explicit and coercive to non-intrusive and implicit
GPS chips were embedded in the golf carts, and the flowerbed areas geo-fenced to ensure that golf carts would shut down when approaching the flowerbeds. This example of a fundamental shift to a non-normative regulatory modality for controlling human behaviour, nicely illustrates how, in the future, AI technologies will come to facilitate the sublimation of policing architectures.
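The golf-cart example is essentially a geofence check: the cart's GPS position is tested against protected zones, and the vehicle is disabled inside a buffer around them. A minimal sketch of that check, with coordinates, radii and the buffer distance all invented:

```python
from math import hypot

# Hypothetical geo-fenced flowerbeds: (centre (x, y) in metres, radius in metres)
FLOWERBEDS = [((10.0, 20.0), 5.0), ((40.0, 15.0), 3.0)]

def cart_allowed(x, y, buffer_m=1.0):
    """Return False when the cart is within buffer_m metres of any fenced zone,
    i.e. when the geofence controller should shut the motor down."""
    return all(
        hypot(x - cx, y - cy) > radius + buffer_m
        for (cx, cy), radius in FLOWERBEDS
    )

assert cart_allowed(0.0, 0.0)        # open fairway: cart keeps driving
assert not cart_allowed(10.0, 18.0)  # inside a fenced flowerbed: motor cuts out
```

The regulatory point survives the simplification: the rule is enforced by the truth value of a distance test, not by a norm anyone can argue with.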
(3) oracle state: detection→enforcement, prediction→prevention
prejudice problems, not quite related to my topic
[Article]AI Use in Criminal Matters as Permitted under EU Law and as Needed to Safeguard the Essence of Fundamental Rights
focus: AI legislation and the protection of fundamental rights
fundamental rights: non-discrimination, protection of personal data, protection of private life, freedom of expression, fair trial
besides privacy, freedom of expression is also worth considering
- AI is a necessary instrument as information mounts and the world’s complexity increases
- benefits:
- break down obsolete traditional instruments
- make justice an interdisciplinary system
- management of information
- (the author believes) help with the protection of fundamental rights
- Currently, the protection of fundamental rights is far from ideal. Thus we need to focus on legislation so that AI does not harm these rights and, ideally, helps protect them
- AI-evidence
It is expected that as AI shall become a constant component of society it will start to produce content that shall represent evidence.
AI-driven evidence can be a neutral read of reality (coding can be improved) [REALLY?]
- AI-decisions
Formally, the legitimacy of the criminal investigation derives from the accountability of those that investigate.
AI-provided arguments could influence human judges
- Legislations
Not very relevant.
[Article]Algorithmic State Violence: Automated Surveillance and Palestinian Dispossession in Hebron’s Old City
A case study illustrating how AI surveillance can adversely affect people’s fundamental rights
Blue Wolf - a biometric identification system deployed by the Israeli army - functions as a ‘state-sponsored terror’.
“terror capitalism”:
Under surveillance capitalism, protected consumers of digital technologies are coerced into producing data and feeding novel configurations of corporate power.
technology firms’ ability to capitalize on users’ private information, turning human data into a free source of profit.
civilian technology firms & military forces/government
AI surveillance by the government, together with technology giants, can give rise to a more irresistible governing force
Residents of Hebron say that life under algorithmic surveillance is “a pervasive kind of terror”: a reckless soldier might shoot them over the “red mark”, even though the algorithm goes wrong from time to time.
Samira explained how the new surveillance system was encroaching on this last domain of autonomy. Soldiers regularly entered without warning to maintain the surveillance system. “They act as if we’re not even there.” The new technologies rationalized rather than reduced the army’s intrusion on her life. “I just don’t feel secure inside my own home. And this was the last place I felt secure.”
Blue Wolf changed how residents behave in their own homes; they no longer feel secure or at ease.
Blue Wolf was not just stealing civilians’ personal data, but also the viability of a dignified life itself, so long as they remained in their homes.
These dehumanizing forms of policing and self-policing strengthened the Israeli military’s hold on Palestinian life while offering a valuable resource for a burgeoning entrepreneurial class of army trained developers and corporate CEOs.
[Article]Measuring the Acceptability of Facial Recognition Enabled Work Surveillance Cameras in the Public and Private Sector
The paper conducted an empirical study of Canadians’ attitudes (intrusive / acceptable / it depends) towards installing cameras equipped with facial recognition in workplaces to prevent theft and monitor performance.
Such cameras (electronic surveillance) are more pervasive and intrusive than before.
The study surveyed more than 3000 Canadian private- and public-sector employees of different ages, via multiple channels.
(I skip to the RESULTS, and especially focus on “why it depends”)
- transparency (36% of comments)
regulations on the use of the collected data, knowledge of camera locations
keep a record of how video clips are used
- administrative and safety concerns (34% of comments)
theft prevention is seen as a legitimate use of such cameras; performance monitoring is not
- surveillance and authoritarian concerns (20% of comments)
suspicious of employer’s intention, worries about privacy infringements
“Big Brother watching you all the time”
- some are not against electronic monitoring of performance (6%)
[Article]Privacy and data protection in the surveillance society: The case of the Prüm system
The increased mobility of persons has posed challenges (like cross-border crime) to the EU. Hence the Prüm system, which aims to make the exchange of information (forensic DNA, fingerprints, vehicle registration data) smoother.
Meanwhile, EU legislation ought to find a balance between surveillance and protection of fundamental rights (privacy, democracy).
“disproportionate surveillance of ordinary citizens”
- Two-step approach to exchange DNA information
In step 1, Member States confirm whether or not the DNA profile of their database matches the profile that exists in another Member State's database. After a hit/match confirmation, this will trigger step 2 if requested, i.e. the exchange of further information about the DNA's owner.
The information exchange is performed in accordance with the data protection law of both countries, as well as the minimum criteria established by the European Commission for data protection.
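The two-step design can be sketched as a hit/no-hit protocol: step 1 compares only an anonymized fingerprint of the profile, and personal data moves in step 2 only after a confirmed hit and a legality check. The database contents, names, and the use of a plain SHA-256 digest are invented for illustration; real Prüm matching compares DNA loci, not hashes.

```python
import hashlib

# Hypothetical database held by State B: profile digest -> owner record (invented)
DB_STATE_B = {
    hashlib.sha256(b"profile-xyz").hexdigest(): {"name": "J. Doe", "case": "B-1041"},
}

def step1_hit_no_hit(profile: bytes) -> bool:
    """Step 1: State A learns only whether State B holds a matching profile.
    No personal data crosses the border at this stage."""
    return hashlib.sha256(profile).hexdigest() in DB_STATE_B

def step2_request_details(profile: bytes, legal_basis_ok: bool) -> dict:
    """Step 2: after a confirmed hit, further data is released only if the
    request satisfies both states' data-protection law (modelled as a boolean)."""
    if not (step1_hit_no_hit(profile) and legal_basis_ok):
        raise PermissionError("no hit, or request fails data-protection requirements")
    return DB_STATE_B[hashlib.sha256(profile).hexdigest()]
```

The privacy property worth noticing is data minimization: a no-hit query reveals a single bit, and everything more sensitive is gated behind the step-2 legality check.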
similar ideas can be applied to AI surveillance
balance individual rights and collective safety
sociotechnical imaginary: “collectively imagined forms of social life and social order reflected in the design and fulfilment of nation-specific scientific and/or technological projects”
[Webpage]Artificial Intelligence Regulation Threatens Free Expression
AI has raised moral concerns, just as any kind of new tech would
Regulations on AI’s “alignment with human values” can pose a hazard to free speech, because such “human values” are defined by tech giants and governments. Such regulation could prohibit AI from saying anything these actors dislike, even when it is actually protected expression.
a risk-based approach acknowledges that most AI applications, especially those involving speech and expression, should be considered innocent until proven guilty
The market for AI tools, however, has an important distinction from social media that can encourage greater expression—the absence of network effects.
- Author: XiaoTianyao
- Link: https://www.xty27.top/article/ctaw
- Statement: This article is licensed under CC BY-NC-SA 4.0; please credit the source when reposting.