
Professor Katherine Isbister at the University of California Santa Cruz’s Computational Media department is looking for new graduate students (Master’s and Ph.D.) to join her group for Fall 2017.

Interested students should apply to the program using the university’s website (see below) and also send an email to [email protected] to indicate interest. Helpful information when sending an email of interest includes:

- a description of research interests and skillset(s) related to developing and studying tangibles and wearables for play
- a curriculum vitae
- a copy of your unofficial transcript

Social Emotional Technology Lab
The mission of the newly founded Social Emotional Technology Lab at UC Santa Cruz is to build and study human-computer interaction technologies that enhance social and emotional experience. Our work takes place at the intersection of games and HCI research and practice. A partial list of projects can be found here: http://www.katherineinterface.com/page3/page3.html

Focal Projects
New graduate researchers are particularly sought to take part in research and development of a) playful tangible technology for self-regulation (‘Fidget Widgets’) and b) wearables to support collocated play. Recent publications from both projects are available upon request.

Relevant Skillsets
Programming and hardware prototyping
Experience designing and implementing games and playful systems
User research

Computational Media at UC Santa Cruz
Computational Media is all around us — video games, social media, interactive narrative, smartphone apps, computer-generated films, personalized health coaching, and more. To create these kinds of media, to deeply understand them, to push them forward in novel directions, requires a new kind of interdisciplinary thinker and maker. The new graduate degrees in Computational Media at UC Santa Cruz are designed with this person in mind.

Application Process
Applications for the new programs opened October 1st and close January 3rd. The GRE general test is required. For more information, please visit:

Academics, HCI


Usable Security (USEC) Workshop 2017

Abstract Submission Deadline: 7 December 2016
Full Paper Submission Deadline: 14 December 2016

Notification: 21 January 2017
Camera ready copy due: 31 January 2017
Workshop: 26 February 2017 (co-located with NDSS 2017) at the Catamaran Resort Hotel & Spa in San Diego, CA

Conference Website: http://www.dcs.gla.ac.uk/~karen/usec/

Submission Instructions: http://www.dcs.gla.ac.uk/~karen/usec/submission.html

One cannot have security and privacy without considering both the technical and human aspects thereof. If the user is not given due consideration in the development process, the system is unlikely to enable users to protect their privacy and security on the Internet.

Usable security and privacy is more complicated than traditional usability. This is because traditional usability principles cannot always be applied. For example, one of the cornerstones of usability is that people are given feedback on their actions and are helped to recover from errors. In authentication, we obfuscate password entry (a usability fail) and give people no assistance to recover from errors. Moreover, security is often not related to the actual functionality of the system, so people often see it as a bolt-on and an annoying hurdle. These and other usability challenges of security are the focus of this workshop.
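To make the feedback problem concrete, here is a minimal, hypothetical Python sketch (not taken from the workshop or any submission): the password prompt deliberately suppresses echo, and a failed attempt returns only a generic message, so the user gets neither feedback on their actions nor help recovering from the error. The example password, stored hash, and login function are illustrative assumptions.

    import getpass
    import hashlib
    import hmac

    # Hypothetical stored credential: the SHA-256 hash of the password "hunter2".
    STORED_HASH = hashlib.sha256(b"hunter2").hexdigest()

    def login() -> bool:
        # getpass suppresses echo, so the user sees nothing while typing --
        # good against shoulder surfing, but a textbook feedback failure.
        attempt = getpass.getpass("Password: ")
        ok = hmac.compare_digest(
            hashlib.sha256(attempt.encode()).hexdigest(), STORED_HASH
        )
        if not ok:
            # Deliberately vague: no hint about what went wrong, so the user
            # gets no help recovering from the error.
            print("Login failed.")
        return ok

    if __name__ == "__main__":
        login()

Running it, nothing appears on screen as you type, and any typo yields only "Login failed." with no clue about what went wrong, which is exactly the usability trade-off described above.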

We invite submissions on all aspects of human factors, including mental models, adoption, and usability, in the context of security and privacy. USEC 2017 aims to bring together researchers already engaged in this interdisciplinary effort with other computer science researchers in areas such as visualization, artificial intelligence, machine learning, and theoretical computer science, as well as researchers from other domains such as economics, law, the social sciences, and psychology. We particularly encourage collaborative research from authors in multiple disciplines.

Topics include, but are not limited to:

  • Human factors related to the deployment of the Internet of Things (New topic for 2017)
  • Usable security / privacy evaluation of existing and/or proposed solutions
  • Mental models that contribute to, or complicate, security or privacy
  • Lessons learned from designing, deploying, managing or evaluating security and privacy technologies
  • Foundations of usable security and privacy incl. usable security and privacy patterns
  • Ethical, psychological, sociological, economic, and legal aspects of security and privacy technologies

We also encourage submissions that contribute to the research community’s knowledge base:

  • Reports of replicating previously published studies and experiments
  • Reports of failed usable security studies or experiments, with the focus on the lessons learned from such experience

Academics, Security


I like this infographic, though with due respect to the authors, I’m skeptical about the claim that ‘connected cars’ (as if there’s only one thing called a connected car) have 10 times as much code as a Boeing 787. But I’m nitpicking. I appreciate that this graphic specifically calls out the OBD-II port as a worry spot, as well as noting that insurance dongles lack security. It would be great to do a security analysis of all existing dongles in significant circulation to see how bad things really are. I also quite liked this: “LTE coverage and Wifi in the car expose you to the same vulnerabilities as a house on wheels.” That’s simple and effective writing – bravo Arxan.

The Recommendations at the bottom are aimed at consumers. They’re all reasonable, and this is the first time I’m seeing “Don’t jailbreak your car.” Again, good on you, Arxan. I’m amused by the suggestion to check your outlets periodically and make sure you know what’s installed. It’s like a combination of encouraging safe sex for your car and ‘watch out for spoofed ATMs.’

Arxan is, however, a B2B company, so in addition to consumer recommendations, I would like to see industry recommendations. Of course, those suggestions are part of the services they offer, so they can’t give away too much for free, but still – a few pearls of wisdom would be welcome. I know it’s too much to ask for policy-oriented suggestions – especially ones that raise costs – so here are a few:

  • Security Impact Analysis should be a regulatory requirement for all cars that rise above a certain threshold of connectivity (a topic for exploration)

  • Strengthen data breach notification laws (a general suggestion, not just for cars or IoT)

  • Car companies should be required to have CISOs

Data Ownership, Data Protection, Policy, Security, User Control

IoT Privacy Forum founder Gilad Rosner features in the latest episode of O’Reilly’s Hardware Podcast, discussing his research for the British government, and the differences between European and US privacy cultures.

On his work used by the UK Parliament (paraphrased in parts):

That research came about when the UK government put out a call and said, we’d like to vacuum up a lot of social media data and analyze it for government purposes: “beneficial outcomes” rather than law enforcement. Trying to look at data and come up with new information that would theoretically be beneficial to society. They were wondering how they’d go about it — whether “public” social media posts could present ethical problems when inhaling all that data for analysis. The answer is: yes, there are ethical problems here, because even though information is set to “public”, there’s a concept of respecting the context in which the data was uploaded, or volunteered. When I tweet, I’m not necessarily expecting the government to mine that information about me.

When it comes to privacy and data protection, especially with a large actor like the government, one of the most important concerns is procedural safeguards. Governments have ideas all the time, often good ideas, but the apparatus of implementing these ideas is very large, bureaucratic, and diffuse. So what constrains these activities, to make sure they’re being done securely, in line with existing privacy regulations, and with people being sensitive to things not necessarily covered by regulation, but still potentially worrisome? How do we come up with ways of letting good ideas happen, but under control?

Academics, Data Protection, Law, Policy, Power, Transparency