
Tag Archives: natasha-lomas

Europe takes another step towards copyright pre-filters for user generated content

In a key vote this morning the European Parliament’s legal affairs committee has backed the two most controversial elements of a digital copyright reform package — which critics warn could have a chilling effect on Internet norms like memes and also damage freedom of expression online.

In the draft copyright directive, Article 11 (“Protection of press publications concerning online uses”), which targets news aggregator business models by setting out a neighboring right for snippets of journalistic content that requires a license from the publisher to use this type of content (aka ‘the link tax’, as critics dub it), was adopted by a 13:12 majority of the legal committee. Article 13 (“Use of protected content by online content sharing service providers”), which makes platforms directly liable for copyright infringements by their users — thereby pushing them towards creating filters that monitor all content uploads, with all the associated potential chilling effects (aka ‘censorship machines’) — was adopted by a 15:10 majority.

MEPs critical of the proposals have vowed to continue to oppose the measures, and the EU parliament will eventually need to vote as a whole.

“#Article13, the #CensorshipMachines, has been adopted by @EP_Legal with a 15:10 majority. Again: We will take this fight to plenary and still hope to #SaveYourInternet pic.twitter.com/BLguxmHCWs” — Julia Reda (@Senficon) June 20, 2018

EU Member State representatives in the EU Council will also need to vote on the reforms before the directive can become law. Though, as it stands, a majority of European governments appear to back the proposals. European digital rights group EDRi, a long-standing critic of Article 13, has a breakdown of the next steps for the copyright directive here.

It’s possible there could be another key vote in the parliament next month — ahead of negotiations with the European Council, which could be finished by fall. A final vote on a legally checked text will take place in the parliament — perhaps before the end of the year. Derailing the proposals now essentially rests on whether enough MEPs can be convinced it’s politically expedient to do so — factoring in a timeline that includes the next EU parliament elections, in May 2019.

“We can still turn this around! The #linktax and #uploadfilters passed a critical hurdle today. But in just 2 weeks, all 751 MEPs will be asked to take a stand either for or against a free & open internet. The people of Europe managed to stop ACTA, we can #SaveYourInternet again! pic.twitter.com/883ID7CKDE” — Julia Reda (@Senficon) June 20, 2018

Last week, a coalition of original Internet architects, computer scientists, academics and supporters — including Sir Tim Berners-Lee, Vint Cerf, Bruce Schneier, Jimmy Wales and Mitch Kapor — penned an open letter to the European Parliament’s president to oppose Article 13, warning that while “well-intended” the requirement that Internet platforms perform automatic filtering of all content uploaded by users “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.


Blockchain browser Brave starts opt-in testing of on-device ad targeting

Brave, an ad-blocking web browser with a blockchain-based twist, has started trials of ads that reward viewers for watching them — the next step in its ambitious push towards a consent-based, pro-privacy overhaul of online advertising. Brave’s Basic Attention Token (BAT) is the underlying micropayments mechanism it’s using to fuel the model. The startup was founded in 2015 by former Mozilla CEO Brendan Eich, and had a hugely successful initial coin offering last year.

In a blog post announcing the opt-in trial yesterday, Brave says it’s started “voluntary testing” of the ad model before it scales up to additional user trials. These first tests involve around 250 “pre-packaged ads” being shown to trial volunteers via a dedicated version of the Brave browser that’s both loaded with the ads and capable of tracking users’ browsing behavior. The startup signed up Dow Jones Media Group as a partner for the trial-based ad content back in April. People interested in joining these trials are being asked to contact its Early Access group — via community.brave.com.

Brave says the test is intended to analyze user interactions to generate test data for training its on-device machine learning algorithms. So while its ultimate goal for the BAT platform is to be able to deliver ads without eroding individual users’ privacy via this kind of invasive tracking, the test phase does involve “a detailed log” of browsing activity being sent to it. Though Brave also specifies: “Brave will not share this information, and users can leave this test at any time by switching off this feature or using a regular version of Brave (which never logs user browsing data to any server).”

“Once we’re satisfied with the performance of the ad system, Brave ads will be shown directly in the browser in a private channel to users who consent to see them. When the Brave ad system becomes widely available, users will receive 70% of the gross ad revenue, while preserving their privacy,” it adds.

The key privacy-by-design shift Brave is working towards is moving ad targeting from a cloud-based ad exchange to the local device, where users can control their own interactions with marketing content and don’t have to give up personal data to a chain of opaque third parties (armed with hooks and data-sucking pipes) in order to do so. Local device ad targeting will work by Brave pushing out ad catalogs (one per region and natural language) to available devices on a recurring basis. “Downloading a catalog does not identify any user,” it writes. “As the user browses, Brave locally matches the best available ad from the catalog to display that ad at the appropriate time. Brave ads are opt-in and consent-based (disabled by default), and engineered to operate without leaking the user’s personal data from their device.” It couches this approach as “a more efficient and direct opportunity to access user attention without the inherent liabilities and risks involved with large scale user data collection”.
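To make that local-matching architecture a little more concrete, here is a minimal Python sketch of how on-device matching against a downloaded catalog could work. It is an illustration only, not Brave’s code: the catalog fields, the interest-weight profile and the scoring rule are all assumptions drawn from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class AdEntry:
    """One entry in a regional ad catalog (illustrative fields, not Brave's schema)."""
    ad_id: str
    categories: set        # e.g. {"travel", "finance"}
    payout_bat: float      # BAT reward offered per confirmed view

@dataclass
class LocalProfile:
    """Interest signals derived from browsing history; never leaves the device."""
    interests: dict = field(default_factory=dict)  # category -> weight

    def record_visit(self, category: str) -> None:
        self.interests[category] = self.interests.get(category, 0) + 1

def pick_best_ad(catalog, profile):
    """Score catalog entries against the on-device profile and return the best match.

    All matching happens locally: only the chosen ad is displayed, and no
    browsing data is uploaded anywhere.
    """
    if not catalog:
        return None

    def score(ad):
        overlap = sum(profile.interests.get(c, 0) for c in ad.categories)
        return overlap + 0.01 * ad.payout_bat  # tiny tie-breaker on reward

    best = max(catalog, key=score)
    return best if score(best) > 0 else None

# Example: the browser downloads a regional catalog and matches it locally.
catalog = [
    AdEntry("ad-1", {"travel"}, 0.25),
    AdEntry("ad-2", {"finance", "crypto"}, 0.40),
]
profile = LocalProfile()
profile.record_visit("crypto")
print(pick_best_ad(catalog, profile).ad_id)  # -> "ad-2"
```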


Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

A third party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit (full report here) — conducted by law firm Linklaters — of the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google DeepMind (using an existing NHS algorithm for early detection of the condition), does not examine the problematic 2015 information-sharing agreement inked between the pair which allowed data to start flowing.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist, with the wide-ranging document raising questions over the broad scope of the data transfer; the legal bases for patients’ information to be shared; and leading to questions over whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps. They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the UK’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement. And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with Data Protection Law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO. Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action. In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.” So essentially the core controversy — the legal basis for the Royal Free to pass personally identifiable information on 1.6M patients to DeepMind when the app was being developed, without people’s knowledge or consent — is going unaddressed here.


Adblock Plus wants to use blockchain to call out fake news

eyeo, the company behind the popular browser-based ad block product Adblock Plus, is no stranger to controversy. Which is just as well given its new “passion project”: a browser add-on that labels news content as ‘trusted’ or, well, Breitbart.

The beta browser extension, which is called Trusted News (initially it’s just available for Chrome), is intended to help Internet users spot sources of fake news when they’re exposed to content online. And thus to help people avoid falling for scams or down into political sinkholes — at least without being aware of their inherent bias. The system, which is currently only available for English language content, “democratically scores the integrity and trustworthiness of online news sources”, as eyeo puts it.

After being added to Chrome, the browser extension displays a small green check mark against its icon if a news source is deemed to be trustworthy. Or you might see an orange colored ‘B’ — denoting ‘bias’ — as in the below example, for the ‘alt right’ news website Breitbart…

The extension can also deploy flags for untrustworthy, satire (denoted with a little blue smilie), clickbait, user-generated content, malicious or unknown — the latter if the site hasn’t yet been classified. It’s not clear how many sites have been classified via the system at this stage.

So how is Trusted News classifying sites? In the first instance eyeo says it’s leaning on four third party fact-checking organizations to generate its classifications: PolitiFact, Snopes, Wikipedia and Zimdars’ List. “For now the way that it works is that you have these sources… and what they will do is essentially give their rating on a particular site and then, basically, if everything isn’t all the same — which they usually are — then you would just go by the majority,” explains Ben Williams, the company’s director of ecosystems.
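As a rough illustration of the majority-vote aggregation Williams describes, here is a minimal Python sketch. The label names and the four sources come from the article; the data structures and the tie-handling (falling back to “unknown”) are assumptions, not eyeo’s implementation.

```python
from collections import Counter

# Labels used by the Trusted News extension, per the article.
LABELS = {"trustworthy", "bias", "untrustworthy", "satire",
          "clickbait", "user-generated content", "malicious", "unknown"}

def classify_site(ratings):
    """Aggregate per-source ratings for one site by simple majority.

    `ratings` maps a fact-checking source (e.g. "PolitiFact", "Snopes",
    "Wikipedia", "Zimdars' List") to the label it assigns; sources with
    no opinion on the site are simply omitted.
    """
    if not ratings:
        return "unknown"  # site not yet classified by any source
    assert set(ratings.values()) <= LABELS, "unexpected label"
    label, votes = Counter(ratings.values()).most_common(1)[0]
    # Assumption: if no label wins an outright majority, fall back to "unknown".
    return label if votes > len(ratings) / 2 else "unknown"

# Example: three of the four sources agree, so the majority label wins.
print(classify_site({
    "PolitiFact": "bias",
    "Snopes": "bias",
    "Wikipedia": "bias",
    "Zimdars' List": "untrustworthy",
}))  # -> "bias"
```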


Spanish soccer app caught using microphone and GPS to snoop

If you’ve ever found yourself wondering why an app is requesting microphone access when there doesn’t seem to be any logical reason why it should need to snoop on the sounds from your surroundings, hold that thought — and take a closer look at the T&Cs. Because it might turn out that spying is exactly what the app makers have in mind.

To wit: La Liga, an app for fans of Spanish soccer, has been discovered using microphone access combined with the precise GPS location of Android users to listen in on people’s surroundings during match times — in a bid to catch bars that might not have a license to broadcast the match being watched. As surveillance capitalism goes, it’s a fiendishly creative repurposing of your users as, well, unwitting volunteer spies and snitches. It’s also of course terrible human behavior.

Behavior that has now garnered La Liga a bunch of one-star reviews for the Android app — along the lines of “this app converts you into a police whistler without you noticing!” and “it spies on you via the microphone and GPS. Rubbish. Don’t install”.

The snitch feature appears to have surfaced largely as a result of the European Union’s new data protection framework, GDPR — which requires app makers to explain more precisely what exactly they’re doing with people’s data. Ergo, La Liga users started noticing what the app wanted to do and discussing and denouncing it on social media, where it blew up into a trending topic, as El Pais reports.

In a statement on its website responding to the snitch scandal, the league defends its actions, writing that it has “a responsibility to protect the clubs and their fans” from unlicensed broadcasts being made in public places, claiming that such activity results in the loss of an estimated €150M annually from the league. It also specifies that the feature is only deployed in its Android app, and that it has apparently only been active since June 8. It also says it’s only used within Spain.


Uber is bringing its Jump e-bikes to Europe

Dockless bike sharing startups — such as Ofo, Mobike and LimeBike — have flooded European cities with rides that can be hired at the tap of an app in recent years. But fierce competition in the urban mobility space is not deterring Uber from peddling into the region, and attempting to put some shine back on a brand that’s still divisive — charged with all sorts of problematic effects, from rising congestion and air pollution to having a damaging impact on workers’ rights.

It’s certainly true that the hangover from Uber’s legacy operational style of brash expansionism and thumbing its nose at regulators continues to cause the company problems in Europe. Many cities have banned its p2p service, and last year — in a major upset — London’s transport regulator withdrew its license to operate. Though under new CEO Dara Khosrowshahi Uber has also been expanding in some European markets — where regulatory requirements allow. Uber’s new chief executive has taken a strikingly different tone vs founder Travis Kalanick, saying he wants to work with cities and local authorities, rather than fight them.

Today at the NOAH conference in Berlin that emollient tone was on show again, with Khosrowshahi announcing that Uber’s Jump electric bike sharing service will launch in the city this summer. “Here in Germany, I am determined to have a better dialogue with cities and various German stakeholders to discuss how we can shape the future of urban mobility together. Uber stands ready to help address some of the biggest challenges facing German cities: tackling air pollution, reducing congestion and increasing access to cleaner transportation solutions,” he said.

Launches in other, as yet unnamed, European cities are also slated for the coming months. And bikes can’t be accused of exacerbating air pollution or road-based congestion.

Khosrowshahi also said Uber will launch its all-electric Uber Green service in Berlin by the end of the year, following a recent launch in Munich — saying that was Uber “playing our part in tackling air pollution”. “I’m thrilled to announce two new products for Berlin that are an important first step in developing our long term partnership with Germany — our Jump pedal-assist electric bikes and the introduction of a fully electric Uber Green service,” he added.


Europe to cap intra-EU call fees as part of overhaul to telecoms rules

European Union institutions have reached a political agreement over an update to the bloc’s telecoms rules that’s rattled the cages of incumbent telcos. Agreement was secured late yesterday after months of negotiations between the EU parliament and Council, with the former pushing for and securing a price cap on international calls within the bloc — of no more than 19 cents per minute. Texts will also be capped at a maximum of 6 cents each, Reuters reports.

While roaming charges for EU travelers were abolished across the bloc last summer, the parliament was concerned that charges for calls and texts between EU Member States are often disproportionately high — hence pushing for the cap, which was not in the original EC proposal.

The Commission proposed a new European Electronic Communications Code back in 2016, to modernize telecoms rules that had stood since 2009 — to take account of technology and market shifts, and align the rules with its wider Digital Single Market strategy. The proposal broadly focused on pushing for consistency in spectrum policy and management; reducing regulatory fragmentation; ensuring a level playing field for market players and protections for consumers; and incentivizing investment in high-speed broadband networks.

And on the incentivization front, the new rules agreed yesterday update the powers of national regulators to act against dominant players — such as being able to impose access to their networks. For a case study on why such interventions might be necessary you could look at the fiber investment and network-access foot-dragging of a former incumbent telco such as BT in the UK, for example, which has long favored eking out copper. Its network infrastructure division Openreach was last year ordered to be legally separated — around a decade after it was functionally separated by the regulator. Yet complaints over BT’s lack of investment in broadband infrastructure and access for rivals to its networks have, nonetheless, persisted.

On the consumer front, the new EU telecoms Code also includes measures intended to make it easier to change service provider and keep the same phone number; measures around tariff transparency to make it easier for people to compare contractual offers, plus the ability to terminate a contract without incurring additional costs; as well as additional protections around bundled services.

For operators there are deregulation measures for co-investments — intended to promote “risk sharing in the deployment of very high capacity networks”. And the Code sets wireless spectrum licenses at a minimum of 20 years — also intended to give carriers the “predictability” they need to speed up 5G and fiber deployments. Though this is shorter than operators had hoped, and the European Telecommunications Network Operators’ Association (ETNO) — whose membership is made up of incumbent telcos such as BT — has been quick to voice its displeasure, describing the Code as a “missed opportunity”, and complaining that it adds extra complexity while also failing to incentivize investment. “The Code will not ignite the much needed rush to invest in 5G and fibre networks and it will add complexity to an already burdensome system,” it writes.


Apple got even tougher on ad trackers at WWDC

Apple unveiled a handful of pro-privacy enhancements for its Safari web browser at its annual developer event yesterday, building on an ad tracker blocker it announced at WWDC a year ago. The feature — which Apple dubbed ‘Intelligent Tracking Prevention’ (ITP) — places restrictions on cookies based on how frequently a user interacts with the website that dropped them. After 30 days of a site not being visited, Safari purges its cookies entirely.

Since debuting ITP a major data misuse scandal has engulfed Facebook, and consumer awareness about how social platforms and data brokers track them around the web and erode their privacy by building detailed profiles to target them with ads has likely never been higher. Apple was ahead of the pack on this issue and is now nicely positioned to surf a rising wave of concern about how web infrastructure watches what users are doing by getting even tougher on trackers.

Cupertino’s business model also of course aligns with privacy, given the company’s main money spinner is device sales. And features intended to help safeguard users’ data remain one of the clearest and most compelling points of differentiation vs rival devices running Google’s Android OS, for example. “Safari works really hard to protect your privacy and this year it’s working even harder,” said Craig Federighi, Apple’s SVP of software engineering, during yesterday’s keynote.

He then took direct aim at social media giant Facebook — highlighting how social plugins such as Like buttons, and comment fields which use a Facebook login, form a core part of the tracking infrastructure that follows people as they browse across the web. In April US lawmakers also closely questioned Facebook’s CEO Mark Zuckerberg about the information the company gleans on users via their offsite web browsing, gathered via its tracking cookies and pixels — receiving only evasive answers in return.
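For illustration, here is a minimal Python sketch of that interaction-based purge rule, assuming a single 30-day threshold as described above; the data structures are invented for the example and this is not WebKit’s actual logic.

```python
from datetime import datetime, timedelta

# Assumed threshold, per the description above: cookies from a site the user
# hasn't visited in 30 days get purged.
PURGE_AFTER = timedelta(days=30)

def purge_stale_cookies(cookie_jar, last_interaction, now=None):
    """Drop cookies for any site the user hasn't interacted with recently.

    `cookie_jar` maps a site (e.g. "tracker.example") to its stored cookies;
    `last_interaction` maps a site to the datetime of the user's last visit.
    Sites with no recorded interaction are treated as stale.
    """
    now = now or datetime.utcnow()
    kept = {}
    for site, cookies in cookie_jar.items():
        last_seen = last_interaction.get(site)
        if last_seen is not None and now - last_seen <= PURGE_AFTER:
            kept[site] = cookies   # user still interacts with this site
        # else: cookies are purged by omission
    return kept

# Example: a third-party tracker the user never visits loses its cookies.
now = datetime(2018, 6, 20)
jar = {"news.example": {"session": "abc"}, "tracker.example": {"id": "xyz"}}
seen = {"news.example": now - timedelta(days=2),
        "tracker.example": now - timedelta(days=45)}
print(purge_stale_cookies(jar, seen, now))  # only news.example survives
```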


Brexit blow for UK’s hopes of helping set AI rules in Europe

The UK’s hopes of retaining an influential role for its data protection agency in shaping European Union regulations post-Brexit — including helping to set any new Europe-wide rules around artificial intelligence — look well and truly dashed.

In a speech at the weekend in front of the International Federation for European Law, the EU’s chief Brexit negotiator, Michel Barnier, shot down the notion of anything other than a so-called ‘adequacy decision’ being on the table for the UK after it exits the bloc. If granted, an adequacy decision is an EU mechanism for enabling citizens’ personal data to more easily flow from the bloc to third countries — as the UK will be after Brexit. Such decisions are only granted by the European Commission after a review of a third country’s privacy standards that’s intended to determine that they offer essentially equivalent protections to EU rules.

But the mechanism does not allow for the third country to be involved, in any shape or form, in discussions around forming and shaping the EU’s rules themselves. So, in the UK’s case, the country would be going from having a seat at the rule-making table to being shut out of the process entirely — at a time when the EU is really setting the global agenda on digital regulations.

“The United Kingdom decided to leave our harmonised system of decision-making and enforcement. It must respect the fact that the European Union will continue to work on the basis of this system, which has allowed us to build a single market, and which allows us to deepen our single market in response to new challenges,” said Barnier in Lisbon on Saturday. “And, as indicated in the European Council guidelines, the UK must understand that the only possibility for the EU to protect personal data is through an adequacy decision. It is one thing to be inside the Union, and another to be outside.”

“Brexit is not, and never will be, in the interest of EU businesses,” he added. “And it will especially run counter to the interests of our businesses if we abandon our decision-making autonomy. This autonomy allows us to set standards for the whole of the EU, but also to see these standards being replicated around the world. This is the normative power of the Union, or what is often called ‘the Brussels effect’.

“And we cannot, and will not, share this decision-making autonomy with a third country, including a former Member State who does not want to be part of the same legal ecosystem as us.”

Earlier this month the UK’s Information Commissioner, Elizabeth Denham, told MPs on the UK parliament’s committee for exiting the European Union that a bespoke data agreement that gave the ICO a continued role after Brexit would be a far superior option to an adequacy agreement — pointing out that the UK stands to lose influence at a time when the EU is setting global privacy standards via the General Data Protection Regulation (GDPR), which came into full force last Friday.


Uber ends policy of forced arbitration for individual sexual assault claims

In a major policy change for its US operations, Uber has announced it’s ending mandatory arbitration for individual claims of sexual assault or sexual harassment by Uber drivers, riders or employees. It is also ending the requirement that victims sign a confidentiality provision preventing them from speaking about the sexual assault or sexual harassment they suffered — saying survivors will now have the option to settle their claims with Uber without having to agree to being publicly silenced in order to do so.

Last month a group of women alleging sexual violence by Uber drivers sent an open letter to the company’s board asking to be released from the mandatory arbitration clause in the Uber app’s terms of service. Former Uber engineer Susan Fowler — who was instrumental in highlighting internal problems with sexual harassment and sexism at Uber when she blogged about her experiences at the company last year — also urged CEO Dara Khosrowshahi to end the policy. And in a Twitter exchange in March Khosrowshahi signaled he was willing to consider ending forced arbitration. “I will take it seriously, but we have to take all of our constituents into consideration,” he wrote to Fowler then.

Concerns about safety and Uber’s attitude to reporting serious crimes were also among the reasons identified by London’s transport regulator for withdrawing Uber’s license to operate in the UK capital last September. Although safety transparency measures also being announced by Uber today appear limited to the US market for now. Uber says it will be publishing what it describes as a “safety transparency report” — which it says will include data on sexual assaults and “other incidents” that occur as a result of activity on its platform.

Announcing the moves in a blog post today, entitled ‘Turning the lights on’, Uber’s chief legal officer Tony West writes that the company has committed to doing “the right thing” under its new CEO — a new attitude which requires “three key elements: transparency, integrity, and accountability”. Describing sexual violence as “a huge problem globally”, he continues: “The last 18 months have exposed a silent epidemic of sexual assault and harassment that haunts every industry and every community. Uber is not immune to this deeply rooted problem, and we believe that it is up to us to be a big part of the solution.”

Commenting on Uber’s policy changes to end mandatory arbitration, Jeanne Christensen, a partner at New York-based law firm Wigdor LLP, which filed a class action lawsuit against Uber last year on behalf of women who said they were assaulted or raped by Uber drivers, described it as a critical step to “reduce future suffering by women passengers”. But she also flagged Uber’s decision not to end forced arbitration for groups of victims acting on a class basis — saying this shows the company is “not fully committed to meaningful change”.


Facebook suspends ~200 suspicious apps out of “thousands” reviewed so far

Did you just notice a Facebook app has gone AWOL? After reviewing “thousands” of apps on its platform following a major data misuse scandal that blew up in March, Facebook has announced it’s suspended around 200 apps — pending what it describes as a “thorough investigation” into whether or not their developers misused Facebook user data.

The action is part of a still ongoing audit of third party applications running on the platform, announced by Facebook in the wake of the Cambridge Analytica data misuse scandal, where a third party developer used quiz apps to extract and pass Facebook user data to the consultancy for political ad targeting purposes. CEO Mark Zuckerberg announced the app audit on March 21, writing that the company would “investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity”. Apps that would not agree to a “thorough audit” would also be banned, he said then.

Just under two months on, the tally is ~200 ‘suspicious’ app suspensions, though the review process is ongoing — and Facebook is not being more specific about the total number of apps it’s looked at so far (beyond saying “thousands”) — so expect that figure to rise. In the Cambridge Analytica instance, Facebook admitted that personal information on as many as 87 million users may have been passed to the political consultancy — without most people’s knowledge or consent.

Giving an update on the app audit process in a blog post, Ime Archibong, Facebook’s VP of product partnerships, writes that the investigation is “in full swing”. “We have large teams of internal and external experts working hard to investigate these apps as quickly as possible,” he says. “To date thousands of apps have been investigated and around 200 have been suspended — pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015 — just as we did for Cambridge Analytica.”

Archibong does not confirm how much longer the audit will take — but does admit there’s a long way to go, writing: “There is a lot more work to be done to find all the apps that may have misused people’s Facebook data – and it will take time.” “We are investing heavily to make sure this investigation is as thorough and timely as possible,” he adds.

Where Facebook does have concerns about an app — such as the ~200 apps it has suspended pending a fuller probe — Archibong says it will conduct interviews; make requests for information (“which ask a series of detailed questions about the app and the data it has access to”); and perform audits “that may include on-site inspections”. So Facebook will not be doing on-site inspections in every suspicious app instance.


Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will enable the AI to make telephone calls on behalf of its human owner.

The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s hair cut, and ringing a restaurant to try to book a table — only to be told it did not accept bookings for less than five people. At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the elected time. Job done.

The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’ — to make the “conversational experience more comfortable”, as Google couches it in a blog about its intentions for the tech. The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased. Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people. At one point the bot’s ‘mm-hmm’ response even drew appreciative laughs from a techie audience that clearly felt in on the ‘joke’.

But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration. One it does not allow to trouble the trajectory of its engineering ingenuity. A consideration which only seems to get a look in years into the AI dev process, at the cusp of a real-world rollout — which Pichai said would be coming shortly.

Deception by design

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding naturally. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.”


Facebook stops accepting foreign-funded ads about Ireland’s abortion vote

Facebook has announced it has stopped accepting ads paid for by foreign entities that are related to a referendum vote in Ireland later this month, saying it’s acting to try to prevent outsiders from attempting to skew the vote. The referendum will decide whether to repeal or retain Ireland’s constitutional ban on abortion.

“Concerns have been raised about organisations and individuals based outside of Ireland trying to influence the outcome of the referendum on the Eighth Amendment to the Constitution of Ireland by buying ads on Facebook. This is an issue we have been thinking about for some time,” the company writes today on its Dublin blog. “Today, as part of our efforts to help protect the integrity of elections and referendums from undue influence, we will begin rejecting ads related to the referendum if they are being run by advertisers based outside of Ireland.”

Facebook says it’s stopping foreign-funded ads because additional ad transparency and election integrity tools it has in the works — and is intending to roll out more widely, across its platform — will not be ready in time for Ireland’s Eighth Amendment vote, which will take place on May 25. “What we are now doing for the referendum on the Eighth Amendment will allow us to operate as though these tools, which are not yet fully available, were in place today with respect to foreign referendum-related advertising. We feel the spirit of this approach is also consistent with the Irish electoral law that prohibits campaigns from accepting foreign donations,” Facebook writes. “This change will apply to ads we determine to be coming from foreign entities which are attempting to influence the outcome of the vote on May 25. We do not intend to block campaigns and advocacy organisations in Ireland from using service providers outside of Ireland,” it adds.

The social media giant’s ad platform has been under increasing political scrutiny since revelations emerged about the extent of Kremlin-backed disinformation campaigns during the 2016 US presidential election. And last year Facebook admitted Kremlin-backed content — including, but not limited to, Facebook ads — may have reached as many as 126 million people during the election period. Concerns have also been raised about the role of its platform during the UK’s 2016 referendum on EU membership — with an investigation into social media and campaign spending ongoing by the UK’s Electoral Commission, and another — by the UK’s data watchdog, the ICO — looking more broadly at the use of data analytics for political purposes.

At the same time, a major Facebook data privacy scandal that erupted in March, after fresh details were published about the use of user data by a controversial political consultancy called Cambridge Analytica, has further dialed up the pressure on the company as lawmakers have turned their attention to the messy intersections of social media and politics. Of course Facebook is by no means the only place online where all sorts of foreign agents have been caught seeking to influence opinions. But the Cambridge Analytica scandal has illustrated the powerful lure of the platform’s reach (and data holdings), as well as underlining how lax Facebook has historically been in controlling the messages people are paying it to target at its users.


Facebook is still falling short on privacy, says German minister

Germany’s justice minister has written to Facebook calling for the platform to implement an internal “control and sanction mechanism” to ensure third-party developers and other external providers are not able to misuse Facebook data — calling for it to both monitor third party compliance with its platform policies and apply “harsh penalties” for any violations.

The letter, which has been published in full in local media, follows the privacy storm that has engulfed the company since mid March, when fresh revelations were published by the Observer of London and the New York Times — detailing how Cambridge Analytica had obtained and used personal information on up to 87 million Facebook users for political ad targeting purposes.

Writing to Facebook’s founder and CEO Mark Zuckerberg, justice minister Katarina Barley welcomes some recent changes the company has made around user privacy, describing its decision to limit collaboration with “data dealers” as “a good start”, for example. However she says the company needs to do more — setting out a series of what she describes as “core requirements” in the area of data and consumer protection (bulleted below). She also writes that the Cambridge Analytica scandal confirms long-standing criticisms against Facebook made by data and consumer advocates in Germany and Europe, adding that it suggests various lawsuits filed against the company’s data practices have “good cause”.

“Unfortunately, Facebook has not responded to this criticism in all these years, or only insufficiently,” she continues (translated via Google Translate). “Facebook has rather expanded its data collection and use. This is at the expense of the privacy and self-determination of its users and third parties.”

“What is needed is that Facebook lives up to its corporate responsibility and makes a serious change,” she says at the end of the letter. “In interviews and advertisements, you have stated that the new EU data protection regulations are the standard worldwide for the social network. Whether Facebook consistently implements this view, unfortunately, seems questionable,” she continues, critically flagging Facebook’s decision to switch the data controller status of ~1.5BN international users this month so they will no longer be under the jurisdiction of EU law, before adding: “I will therefore keep a close eye on the further measures taken by Facebook.”

Since revelations about Cambridge Analytica’s use of Facebook data snowballed into a global privacy scandal for the company this spring, Facebook has revealed a series of changes which it claims are intended to bolster data protection on its platform. Although, in truth, many of the tweaks it has announced were likely in train already — as it has been working for months (if not years) on its response to the EU’s incoming GDPR framework, which will apply from May 25. Yet, even so, many of these measures have been roundly criticized by privacy experts, who argue they do not go far enough to comply with GDPR and will trigger legal challenges once the framework is being applied. For example, a new consent flow, announced by Facebook last month, has been accused of being intentionally manipulative — and of going against the spirit of the new rules, at the very least.


Badoo adds Live Video chat to its dating apps

European dating giant Badoo has added a live video chat feature to its apps, giving users the chance to talk face-to-face with matches from the comfort of their own home — and even before agreeing to go out on a first date. It’s claiming to be the first dating app service to add a live video feature, though clearly major players in the space were not holding back because of the complexity of the technical challenge involved. Rather, live video in a dating app context raises some immediate risk flags, including around inappropriate behavior which could put off users.

And for examples on that front you only need recall the kind of content that veteran Internet service Chatroulette was famed for serving straight up — if you were brave enough to play. (“I pressed ‘play’ last night at around 3:00 am PST and after about 45 clicks on ‘Next’ encountered 5 straight up penis shots,” began TechCrunch’s former co-editor Alexia Tsotsis’ 2010 account of testing the service — which deploys live video chat without any kind of contextual wrapper, dating or otherwise. Clearly Badoo will be hoping to achieve a much better ratio of quality conversation to animated phalli.)

But even beyond the risk of moving dick pics, video chatting with strangers can just be straight up awkward for people to jump into — perhaps especially in a dating context, where singles are trying to make a good impression and won’t want to risk coming across badly if it means they lose out on a potential date. Sending an opening text to a dating match from a cold start can be tricky enough, without ramping up the pressure to impress by making ‘breaking the ice’ into a video call.

So while dating apps have been playing around with video for a while now, it’s mostly been in the style of the Snapchat Stories format — letting users augment their profiles with a bit of richer media storytelling, without the content and confidence risks associated with unmoderated live video. Tinder also recently introduced a GIF-style video loops feature. And it’s a big step from curated and controlled video snippets to the freeform risk and rush of live video. Regardless, Badoo is diving in — so full marks for taking the plunge.
