On 2 October 2010 the Institute of Criminology at the Faculty of Law, University of Ljubljana, held an international colloquium titled Living in (Un)Regulated Surveillance Society?.

Read below for my notes.

Part 1

Introduction of the COST Action "Living in the Surveillance Societies" research programme; Revisiting the Camera Revolution in the UK (William Webster, University of Stirling)

What is LiSS [Living in the Surveillance Societies] – check http://www.liss-cost.co.uk

Revisiting the Camera Revolution in the UK: CCTV was originally deployed to fight crime; it is now used against smaller unacceptable behaviour as well. The UK government was very pro-CCTV, while the public is not (yet) informed enough about CCTV and its implications. [myths_vs_reality_table] In the UK it is very popular to base political decisions on a certain "fact base". One can simply challenge the fact base, and with it the legitimacy and basis of the political decision and/or act is undermined. In CCTV's case the results that should have followed from the "fact base" have not materialised, and the "fact base" itself has been disproved. [photo slideshow of diverse CCTV cameras and signs] The capabilities of a camera and what it is surveilling cannot be judged from its appearance. It is also interesting that cameras are mostly out of reach and/or hidden, which raises suspicion about their presence and purpose, and about what our presumed attitude towards them is.

Regulatory options for surveillance societies (Charles Raab, University of Edinburgh)

What is wrong with regulation?

A common problem with regulation of (new) technologies is that it is reactive – this goes for surveillance as well. It also interprets personal privacy very narrowly – it concentrates only on the individual and not on the broader social implications. Protecting privacy is hard and costly; also, governments and even more so businesses often have more interest in surveillance than in protecting privacy. The media are very ambivalent – either Orwell or security by CCTV.

Apart from international and national laws, there are also privacy advocates and privacy groups that to some extent regulate surveillance and privacy. The problem is that all these tools (especially the latter) are quite weak because of fragmentation.

"Fair information principles" for data protection is a concept of (self-)regulation. The collected data should be accountable; collected, used and disclosed only for necessary purposes, which should also be openly publicised and debated. The problem is that legislation implements this only very weakly.

Gary T. Marx – "Ethics for the New Surveillance" (1998). Half of the issues Marx says must be solved for "new surveillance" are already implemented in data protection/privacy laws, yet these are only the very basics and a start. We have yet to introduce privacy and surveillance impact assessments to start going uphill. The implications of data collection and surveillance are broader than just individual breaches of privacy. Regulatory agencies should have more power and resources, more sanctions available and greater influence over government and business. They also need to cooperate better internationally and be more open to the public. Many regulatory agencies also do not (yet) have internal technological awareness.

We also need global standardisation and a rethink of "privacy" and "public interest" – in some cases the two may very well be identical. For this to happen, public debate is also needed.

IT and surveillance in a BROAD [Broadening the Range Of Awareness in Data protection] sense (Ivan Szekely, OSA Archivum, BME)

There are different aspects of surveillance – criminological, legal, social, and even as an indicator of power relationships. This presentation concentrates on surveillance as a factor influencing people's autonomy.

What do IT professionals think? If code is law, then coders are legislators. The problem is that when deciding on data retention and privacy protection laws we do not ask IT professionals.

BROAD project on internet – http://www.broad-project.eu

Some general findings from the face-to-face interviews with IT professionals in Hungary and the Netherlands:

  • most did not have a formal IT education
  • they confused data protection with data security
  • they thought data protection laws have no impact on their work
  • about the retention of users' data they felt that "it's the way the world works nowadays"
  • but they feel employees should not be surveilled
  • they also tolerate surveillance by the state

Online survey findings:

  • cca. 75% indicated that there had been explicit discussions about what personal data should be collected and used
  • cca. 98% said that if they disagreed, they made it clear
  • but still cca. 75% would carry out the data collection even though they were against it
  • they felt more concerned about privacy and data protection than their superiors, subordinates, team mates and project leaders
  • they felt that in general users are not aware of the implications of data collection
  • only 50% were confident they could protect themselves from data collection
  • consideration of data protection rights is independent of knowledge, nationality, gender, age, employment, size of organisation, profession etc.

Findings of project:

  • IT pros have principles on data processing
  • privacy concerns are only weakly reflected in people's actual online behaviour and work
  • there is no correlation between belonging to a certain group and one's feelings about data collection/privacy
  • although this was a pretty big study, we still cannot judge what IT professionals think about data protection

Post festum comment from one of the participants: the research was a failure, from the questions and their number to the sample taken.

New directions in the comparative studies of privacy policy regime(s) (Karl Löfgren, Roskilde University)

How can we improve comparative studies of privacy policy regulation? Are top-down regulations and studies really the best way? Are the EU and other transnational organisations really the best regulators and the best point of view? Shouldn't we consider bottom-up studies more? Shouldn't we also consider questions before and/or outside politics? How stable are the privacy regulation systems? What are the different forms of communities/networks, and what are their forming and guiding policies and privacy regimes? How democratic and legitimate are they? We should also ask ourselves what privacy is from the philosophical PoV. Missing in the literature is the fact that different nations have different feelings towards privacy and surveillance, as is the question of the elites' influence on surveillance. There should be a map of power structures.

Q&A Session 1

Q: Why did CCTV sprout so much in the UK?

Webster: Initially there were no general public ideas on CCTV, but there was great governmental support and interest from business and the police. A general feeling across the EU is that other nations are now simply implementing the UK system into their own legal systems. Hopefully this will not happen. Raab: It usually takes a very powerful incident (e.g. the kidnapping of a child) captured on CCTV, amplified by the media, to raise public consent for using CCTV. Webster: Because the central government offered a lot of funding for local CCTV, the local governments jumped on the bandwagon – their funding was low and they saw a chance to finally show a change.

Q: Did the crime rates actually drop after CCTV was introduced?

Webster: There are many studies, and most are anecdotal. The quality studies and evidence show that CCTV is effective in closed environments, but not elsewhere. Raab: A great study shows that CCTV doesn't work – and now concentrates on how to make it work.

Q: I observe that public policy is based on what policymakers think, or would rather think, public opinion is. Raab: This imaginary public opinion is a big problem not only in privacy, but elsewhere too.

Part 2

[video about privacy: "Pizza palace"]

We have to take into account two scenarios – one where a serious crime takes place, and another where the employer surveils its employees. Although it can be in the employer's interest to surveil employees – the employer is often liable for the actions of its employees and also wants to make sure they work properly – increased surveillance actually hinders productivity.

There are four spheres of privacy (in the workplace). The American model claims that privacy in the workplace is only a matter of contract between the employee and the employer (basically the employer forces their privacy policy upon the employee). It is important to maintain an atmosphere of trust in the workplace.

Does Blocking of the Illegal Internet Content Really Protect Citizens? (Katalin Parti, Hungarian National Institute of Criminology)

Government-side internet blocking is relatively new, but its roots go back cca. 10-15 years. The most notorious example is the Chinese Great Firewall, but recently even some EU member states have started using it – e.g. France and Germany.

Levels of internet blocking:

  • user level: firewalls, spam filters, soft filtering
  • hardcore filtering:

    • ISP level
    • institutional level
    • government level

Government-level blocking can be carried out by IP filtering, DNS blocking, or search-engine-keyword or URL-keyword based blocking.
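A minimal toy sketch of the difference between blunt IP-level blocking and more precise URL/hostname-level blocking (the hostnames, IPs and function names below are hypothetical illustrations, not the speaker's examples). Because many unrelated sites can share one IP address on a shared host, blacklisting the IP of one illegal page also takes down its innocent neighbours:

```python
# Hypothetical DNS records: hostname -> IP (203.0.113.x is shared hosting)
hosting = {
    "illegal.example": "203.0.113.7",
    "legal-blog.example": "203.0.113.7",  # innocent site on the same host
    "other.example": "198.51.100.4",
}

def blocked_by_ip(host, ip_blacklist):
    """IP-level filter: blocks every site resolving to a blacklisted IP."""
    return hosting[host] in ip_blacklist

def blocked_by_url(host, url_blacklist):
    """URL/hostname-level filter: blocks only the listed sites."""
    return host in url_blacklist

ip_blacklist = {"203.0.113.7"}        # derived from the illegal site's address
url_blacklist = {"illegal.example"}

assert blocked_by_ip("illegal.example", ip_blacklist)       # target blocked
assert blocked_by_ip("legal-blog.example", ip_blacklist)    # overblocking!
assert not blocked_by_url("legal-blog.example", url_blacklist)  # precise
```

URL-level filtering avoids this particular overblock, but it requires inspecting requests rather than just routing, which is why cruder IP or DNS blocking is often what gets deployed in practice.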

Critical issues of government-based blocking:

  • Overblocking and underfiltering at the same time – some methods (e.g. IP or DNS blocking) cannot distinguish between legal and illegal content, so some illegal content remains available while some legal content is blocked.
  • If only mass ISPs are obliged to block, users can simply choose a smaller ISP or use proxy servers to access blocked content.
  • The technique of blocking can undermine its final aim – which is that users should be protected from accessing illegal and harmful content. Blocking only makes such content harder (not impossible) to access; it does not remove it. The ideal solution would be to erase the content on the host servers.
  • Blocking measures do not embrace Web 2.0 content – child sexual abuse material is mainly hosted on personal computers and shared via P2P networks, chatrooms, closed e-mail groups etc.
  • The necessity-proportionality principle fails – the necessity of blocking does not outweigh the breach of privacy, freedom of speech etc.
  • The fair play principle fails – blacklists are collected and updated outside the legislating body, and are thus without the checks and balances that body brings.
  • The constitutional right to freedom of speech fails.
  • Legal awareness does not improve – blacklists are often not openly published, so citizens are not aware of what is legal and what is not.
  • ISP obligations are not clear.
  • The definition of "content provider" is not clear.
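The second point above – that blocking is trivial to circumvent – can be sketched for the DNS-blocking case: the ISP's filtering resolver pretends a blacklisted name does not exist, but a user can simply query a different, unfiltered resolver. All names, addresses and functions here are hypothetical stand-ins for the real mechanism:

```python
# The "real" DNS records a non-filtering resolver would serve.
REAL_RECORDS = {"blocked.example": "203.0.113.7"}

def isp_resolver(name, blacklist=frozenset({"blocked.example"})):
    """Filtering ISP resolver: answers NXDOMAIN (None) for blacklisted names."""
    if name in blacklist:
        return None
    return REAL_RECORDS.get(name)

def open_resolver(name):
    """Unfiltered third-party resolver the user can switch to."""
    return REAL_RECORDS.get(name)

assert isp_resolver("blocked.example") is None            # blocked at the ISP
assert open_resolver("blocked.example") == "203.0.113.7"  # trivially bypassed
```

In practice the "switch" is one settings change (pointing the OS at a public resolver), which is why DNS blocking raises the cost of access only marginally.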

Blocking of illegal internet content does not really protect citizens.

The best way would be to raise public awareness and leave blocking at the user level.

Bio-privacy (Charlotte Bagger Tranberg, Faculty of Social Sciences, Aalborg University)

The main problem with biometrics is that a person always carries their identifier around, and that once collected it can always be used to identify that person.

Biometric data may only be collected under certain rules – necessity and consent/proportionality of the collection.

Interesting Danish case: Crazy Daisy (a chain of discotheques in Denmark) wanted to use fingerprints as the sole method of identification and of managing the number of people in the disco. Oddly enough, it was approved. It is quite questionable whether going out with a group of friends (peer pressure) late in the evening, after a few beers (or more), can be deemed consent of free will.

Interesting Swedish "school" case: a school wanted to check school children by their biometric data. Three courts had three different opinions on the case, which only makes it more apparent that biometric data collection is quite problematic.

Q&A Session 2

Q: In the Crazy Daisy case was there any checking whether they consented in a drunken state? Tranberg: No, and this is a big problem. Of course being in a drunken or drugged state cannot be deemed as consent.

Q: Your proposal to have a hotline to report illegal content – should that hotline be owned by the ISP? Parti: [missing a part] This should be a soft solution. In Slovenia we already have such a hotline on http://www.spletno-oko.si, but it has its problems.

Summary (Katja Šugman Stubbs)

The bottom line: it seems all of us at this colloquium are very concerned about privacy. The general feeling among the public, though – even amongst fellow lawyers – is that surveillance makes them feel safer and that they have nothing to hide, so it's OK. The problem is that many privacy issues are caused by new technology, which is hip, practical and easy to use. The question remains whether the means of surveillance actually achieve their goals (often they do not). Is it becoming, instead of crime prevention, a general control, where everything is surveilled except those who can afford to make sure they are not surveilled – isn't that counterproductive? Law is only the front curtain of what goes on behind it – e.g. the EP twice rejected surveillance of ICT, but under pressure gave up the third time it was proposed and accepted it.

To conclude, we have to reduce fear in order to protect privacy.

Side notes: I saw that one of the participants, in their late middle years, had FreeNet installed on their laptop ;) Is it just my luck, or does coffee at conferences always taste crappy? The only decent coffee I've had at a conference so far was at the EP.

hook out → feeding the dog, making tea, hopefully I'll be able to fight the backlog today
