Deep Learning and Adaptive Sharing for Online Social Networking

Prompted by Facebook Research's recent announcement on using deep learning to help users avoid 'drunk posting' embarrassing information on the social networking platform, I wrote an article for The Conversation about deep learning and adaptive sharing. This draws on our research on Adaptive Sharing for Online Social Networks, which was recognised as the Best Paper at the 13th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom-14). The following is a short excerpt from the article:
Facebook’s initial effort appears aimed at extending its face recognition capability to automatically differentiate between a user’s face when sober and when drunk, and using this to get a user to think twice before hitting the post button. Of course, being detected as drunk in photographs won’t be the only factor that determines when we want to moderate our social media sharing behaviours. The nature of the links we share, like and comment on can reveal a wealth of information about us, from ethnic and socio-economic background to political inclination and sexuality. This makes managing our online privacy a challenging task for any artificial intelligence.
A key challenge to help us manage our privacy more effectively will be to develop techniques that can analyse the data – photographs, their time and location, the people in them and how they appear, or the content of links – and correlate this to the privacy implications for the user given the privacy settings.
Our own research on adaptive sharing in social networks uses a quantitative model of privacy risk and social benefit to evaluate the effect of sharing any given piece of information with different members of the user’s social network. Then it can provide recommendations for audiences to share with, or avoid.
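To make the idea concrete, here is a minimal sketch of how such a risk/benefit trade-off over a user's contacts might look in code. The `Contact` fields, weights, and threshold are illustrative assumptions for this post, not the actual model from our TrustCom-14 paper:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    sensitivity_exposure: float  # hypothetical 0..1 weight: how risky sharing with this contact is
    engagement: float            # hypothetical 0..1 weight: social benefit of sharing with them

def recommend_audience(contacts, item_sensitivity, risk_tolerance=0.0):
    """Split contacts into recommended and avoided audiences.

    A contact is recommended when the estimated social benefit of
    sharing outweighs the estimated privacy risk by at least
    `risk_tolerance`. All scores here are toy placeholders.
    """
    recommended, avoid = [], []
    for c in contacts:
        risk = item_sensitivity * c.sensitivity_exposure
        benefit = c.engagement
        if benefit - risk >= risk_tolerance:
            recommended.append(c.name)
        else:
            avoid.append(c.name)
    return recommended, avoid

# Example: a sensitive photo shared across two very different contacts
contacts = [
    Contact("close_friend", sensitivity_exposure=0.1, engagement=0.9),
    Contact("colleague", sensitivity_exposure=0.8, engagement=0.4),
]
share, avoid = recommend_audience(contacts, item_sensitivity=0.9)
# share -> ["close_friend"], avoid -> ["colleague"]
```

In practice, the hard part is estimating those risk and benefit scores from real data, which is exactly where machine learning comes in.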
Like Facebook’s efforts, our work applies machine-learning techniques – techniques that may one day include detecting drunkenness in photographs, or automatically determining the sensitivity of different information and calculating the potential regret factor of the post you’re about to make. Far from being a flippant or fanciful use of technology, these sorts of models will become a core part of the way we engineer better privacy-awareness into the software we use.
The full version of the article has been republished by several other sites. It can be read on The Conversation UK under the title 'Deep learning could prevent you from drunk posting to Facebook'.