Are Brands Protected In the Metaverse? Hermes and NFT Artist Spar In US Court

Slashdot - Your Rights Online - 4 hours 53 min ago
An anonymous reader quotes a report from The Guardian: Pictures of 100 Birkin bags covered in shaggy, multi-colored fur have become the focus of a court dispute that will decide how digital artists can depict commercial activities in their art and cast new light on whether brands are protected in the metaverse. In the case, being heard this week in a New York federal courtroom, the luxury handbag maker Hermes is challenging an artist who sells the futuristic digital works known as NFTs, or non-fungible tokens. Artist and entrepreneur Mason Rothschild created images of the astonishingly expensive Hermes handbag, the Birkin, digitally covered the bags in fur and turned the pictures into an "art project," which he called MetaBirkin. He then sold editions of the images online for total earnings of more than $1m, according to court records. Hermes promptly sued, claiming the artist was simply "a digital speculator who is seeking to get rich quick by appropriating" the Hermes brand. "The MetaBirkins brand simply rips off Hermes's famous Birkin trademark by adding the generic prefix 'meta,'" read the original complaint filed by Hermes in January last year, noting that the "meta" in the name refers to the digital metaverse now being pumped by technology innovators as the next big thing in tech profit-making. Rothschild, whose real name is Sonny Estival, countered that he has a First Amendment right to depict the hard-to-buy French handbags in his artwork, just as Andy Warhol portrayed giant Campbell's soup cans in his famous pop culture silk screens. "I'm not creating or selling fake Birkin bags. I'm creating art works that depict imaginary, fur-covered Birkin bags," said Rothschild in a letter to the community after the case was filed. "The fact that I sell the art using NFTs doesn't change the fact that it's art."
"One hurdle that Hermes will have to overcome in the case is the fact that US trademark law requires brands to register their trademarks for each specific type of use, so digital sales might require a separate registration," notes the report. "In the end, [Michelle Cooke, a partner at the law firm Arentfox Schiff LLP, who advises brands on these types of trademark issues] says the decision might come down to whether the jury believes Rothschild did the MetaBirkin project as an artistic project 'or was it a money-making venture that he cast as an artistic project when he got into trouble.'" Read more of this story at Slashdot.

Few Americans Understand How Online Tracking Works, Finds Report

Slashdot - Your Rights Online - 14 hours 23 min ago
An anonymous reader quotes a report from The New York Times: Many people in the United States would like to control the information that companies can learn about them online. Yet when presented with a series of true-or-false questions about how digital devices and services track users, most Americans struggled to answer them, according to a report published (PDF) on Tuesday by the Annenberg School for Communication at the University of Pennsylvania. The report analyzed the results of a data privacy survey that included more than 2,000 adults in the United States. Very few of the respondents said they trusted the way online services handled their personal data. The survey also tested people's knowledge about how apps, websites and digital devices may amass and disclose information about people's health, TV-viewing habits and doorbell camera videos. Although many understood how companies can track their emails and website visits, a majority seemed unaware that there are only limited federal protections for the kinds of personal data that online services can collect about consumers. Seventy-seven percent of the participants got nine or fewer of the 17 true-or-false questions right, amounting to an F grade, the report said. Only one person received an A grade, for correctly answering 16 of the questions. No one answered all of them correctly. Seventy-nine percent of survey respondents said they had "little control over what marketers" could learn about them online, while 73 percent said they did not have "the time to keep up with ways to control the information that companies" had about them. "The big takeaway here is that consent is broken, totally broken," Joseph Turow, a media studies professor at the University of Pennsylvania who was the lead author of the report, said in an interview. "The overarching idea that consent, either implicit or explicit, is the solution to this sea of data gathering is totally misguided -- and that's the bottom line."
The survey results challenge a data-for-services trade-off argument that the tech industry has long used to justify consumer tracking and to forestall government limits on it: Consumers may freely use a host of convenient digital tools -- as long as they agree to allow apps, sites, ad technology and marketing analytics firms to track their online activities and employ their personal information. But the new report suggests that many Americans aren't buying into the industry bargain. Sixty-eight percent of respondents said they didn't think it was fair that a store could monitor their online activity if they logged into the retailer's Wi-Fi. And 61 percent indicated they thought it was unacceptable for a store to use their personal information to improve the services they received from the store. Only a small minority -- 18 percent -- said they did not care what companies learned about them online. "When faced with technologies that are increasingly critical for navigating modern life, users often lack a real set of alternatives and cannot reasonably forgo using these tools," Lina M. Khan, the chair of the Federal Trade Commission, said in a speech (PDF) last year. In the talk, Ms. Khan proposed a "type of new paradigm" that could impose "substantive limits" on consumer tracking. Read more of this story at Slashdot.
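The letter-grade framing in the report is simple arithmetic on the 17-question quiz; a minimal Python sketch, where the 90/80/70/60 percent cutoffs are an assumed conventional grading scale (the report's summary does not state its exact bands):

```python
def letter_grade(correct: int, total: int = 17) -> str:
    """Map a quiz score to a school-style letter grade.

    The 90/80/70/60 percent cutoffs are a conventional scale assumed
    for illustration; the Annenberg report does not publish its exact bands.
    """
    pct = 100 * correct / total
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    if pct >= 60:
        return "D"
    return "F"

# 9 of 17 correct is about 53 percent -- an F, the bracket that
# 77 percent of respondents fell into.
print(letter_grade(9))   # F
# 16 of 17 correct is about 94 percent -- the survey's single A.
print(letter_grade(16))  # A
```

On this conventional scale, even 10 of 17 (about 59 percent) would still round down to an F, which underlines how poorly most respondents fared.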

Maryland Motor Vehicles Agency Wants To Know About Your Sleep Apnea

Slashdot - Your Rights Online - 16 hours 8 min ago
"Man goes to the doctor for a sleep apnea diagnosis, a few months later he gets a letter from the state of Maryland about his sleep apnea -- and they won't tell him how they found out about it," writes Slashdot reader schwit1. NBC4 Washington reports: Dr. David Allick, a dentist in Rockville, was diagnosed with mild sleep apnea in June 2022. Months later, he received a letter from the MVA requesting additional information about his diagnosis in order "to determine your fitness to drive." The September 2022 letter noted that failure to return the required forms, which included a report from his physician, could result in the suspension of his license. Allick said he isn't clear how the state learned about his medical diagnosis. But more importantly, he said he was previously unaware of a little-known Maryland law requiring people to report their sleep apnea diagnosis to state driving authorities. Allick said he still has questions about what prompted the ordeal. "Everybody I talked to -- nobody's heard of anything like this," he said, also acknowledging: "I'm sure they want to keep the roads safe." schwit1 adds: "How is this not a HIPAA violation?" The investigation team at NBC4 Washington found that Allick is one of 1,310 people whose sleep apnea diagnoses "have led to medical reviews by the Maryland MVA." The agency didn't have data on how many of these Maryland drivers have had their licenses suspended. Read more of this story at Slashdot.

Wyze Security Cameras Will Go Offline Tonight For Two Hours

Slashdot - Your Rights Online - 17 hours 8 min ago
If you have Wyze cameras or a Wyze home security system, you will need to make other arrangements to monitor your property from 12AM PT to 2AM PT tomorrow morning. The Verge reports: The smart home company sent an email to its customers this week stating that system maintenance on February 8th at 12AM PT will impact every feature of the system that relies on the app or website. That includes being able to alert Noonlight, the professional monitoring company Wyze uses for its Sense security system, about a potential break-in. Not only will your security system be down, but if you use Wyze cameras to keep an eye on things going bump in the night, you'll have to stay awake. Wyze cameras won't be able to upload any video to the cloud or send alerts for motion or other events to the app. While it's a good thing that Wyze is giving customers a heads-up, the flip side is that everyone is getting a heads-up. It's posting a sign that any location using this equipment will be unprotected between these hours, with basically no notice to create a backup plan or take other precautions, depending on your security concerns. It's also worrisome that the professional security customers have paid for and rely on can be completely disabled for "maintenance." Read more of this story at Slashdot.

Ex-Coinbase Manager Pleads Guilty in First Crypto-Related Insider Trading Case

Slashdot - Your Rights Online - Tue, 2023-02-07 19:22
A former Coinbase product manager pleaded guilty on Tuesday in what U.S. prosecutors have called the first insider trading case involving cryptocurrency, his defense lawyer said in a court hearing. From a report: Ishan Wahi, 32, pleaded guilty to two counts of conspiracy to commit wire fraud, after initially pleading not guilty last year. Prosecutors said Wahi shared confidential information with his brother Nikhil and their friend Sameer Ramani about forthcoming announcements of new digital assets that Coinbase would let users trade. "I knew that Sameer Ramani and Nikhil Wahi would use that information to make trading decisions," Ishan Wahi said during Tuesday's hearing in federal court in Manhattan. "It was wrong to misappropriate and disseminate Coinbase's property." Nikhil Wahi and Ramani were charged with using ethereum blockchain wallets to acquire digital assets and trading at least 14 times before Coinbase announcements between June 2021 and April 2022. Read more of this story at Slashdot.

China's Top Android Phones Collect Way More Info

Slashdot - Your Rights Online - Tue, 2023-02-07 16:40
Artem S. Tashkinov writes: Don't buy an Android phone in China, boffins have warned, as they come crammed with preinstalled apps transmitting privacy-sensitive data to third-party domains without consent or notice. The research, conducted by Haoyu Liu (University of Edinburgh), Douglas Leith (Trinity College Dublin), and Paul Patras (University of Edinburgh), suggests that private information leakage poses a serious tracking risk to mobile phone customers in China, even when they travel abroad in countries with stronger privacy laws. In a paper titled "Android OS Privacy Under the Loupe: A Tale from the East," the trio of university boffins analyzed the Android system apps installed on the mobile handsets of three popular smartphone vendors in China: OnePlus, Xiaomi and Oppo Realme. The researchers looked specifically at the information transmitted by the operating system and system apps, in order to exclude user-installed software. They assume users have opted out of analytics and personalization, do not use any cloud storage or optional third-party services, and have not created an account on any platform run by the developer of the Android distribution. A sensible policy, but it doesn't seem to help much. Within this limited scope, the researchers found that Android handsets from the three named vendors "send a worrying amount of Personally Identifiable Information (PII) not only to the device vendor but also to service providers like Baidu and to Chinese mobile network operators." Read more of this story at Slashdot.
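The kind of check the researchers describe -- inspecting outbound traffic from system apps and flagging persistent device identifiers sent to hosts other than the vendor -- can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual tooling; the domain list, identifier names, and the `flag_leaks` helper are all assumptions made for the example:

```python
# Hypothetical sketch: flag outbound requests that carry persistent
# device identifiers (PII) and are addressed to non-vendor hosts.

VENDOR_DOMAINS = {"xiaomi.com", "oneplus.com", "oppo.com", "realme.com"}  # assumed
PII_KEYS = {"imei", "imsi", "mac", "serial", "phone_number"}  # assumed identifier names

def flag_leaks(requests):
    """Return (host, matched_keys) for requests that send PII off-vendor.

    `requests` is a list of dicts like {"host": ..., "params": {...}},
    a stand-in for requests parsed out of a traffic capture.
    """
    leaks = []
    for req in requests:
        host = req["host"].lower()
        third_party = not any(
            host == d or host.endswith("." + d) for d in VENDOR_DOMAINS
        )
        matched = PII_KEYS & {k.lower() for k in req.get("params", {})}
        if third_party and matched:
            leaks.append((host, sorted(matched)))
    return leaks

sample = [
    {"host": "api.oneplus.com", "params": {"imei": "x"}},  # vendor host: not flagged
    {"host": "tracker.example-analytics.cn", "params": {"imei": "x", "mac": "y"}},
]
print(flag_leaks(sample))  # [('tracker.example-analytics.cn', ['imei', 'mac'])]
```

In practice the researchers worked from decrypted traffic captures rather than neatly parsed parameter dicts, but the classification step -- "is this host the vendor, and does the payload contain a stable identifier?" -- is the same in spirit.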

First US Navy Pilot To Publicly Report UAPs Says 'Congress Must Reveal the Truth To the American People'

Slashdot - Your Rights Online - Tue, 2023-02-07 05:30
Ryan Graves, a former U.S. Navy lieutenant and F/A-18F pilot who was the first active-duty fighter pilot to come forward publicly about regular sightings of UAP, says more data is needed about unidentified anomalous phenomena (UAP). "We should encourage pilots and other witnesses to come forward and keep the pressure on Congress to prioritize UAP as a matter of national security," writes Graves in an opinion piece for The Hill. An anonymous Slashdot reader shares an excerpt from the piece: As a former U.S. Navy F/A-18 fighter pilot who witnessed unidentified anomalous phenomena (UAP) on a regular basis, let me be clear. The U.S. government, former presidents, members of Congress of both political parties and directors of national intelligence are trying to tell the American public the same uncomfortable truth I shared: Objects demonstrating extreme capabilities routinely fly over our military facilities and training ranges. We don't know what they are, and we are unable to mitigate their presence. The Office of the Director of National Intelligence (ODNI) last week published its second-ever report on UAP activity. While the unclassified version is brief, its findings are sobering. Over the past year, the government has collected hundreds of new reports of enigmatic objects from military pilots and sensor systems that cannot be identified and "represent a hazard to flight safety." The report also reiterates last year's finding, from a review of a 26-year reporting period, that some UAP may represent advanced technology, noting "unusual flight characteristics or performance capabilities." Mysteriously, no UAP reports have been confirmed to be foreign so far. However, just this past week, a Chinese surveillance balloon shut down air traffic across the United States. How are we supposed to make sense of hundreds of reports of UAP that violate restricted airspace uncontested and interfere with both civilian and military pilots? Here is the hard truth. We don't know.
UAP are a national security problem, and we urgently need more data. Why don't we have more data? Stigma. I know the fear of stigma is a major problem because I was the first active-duty fighter pilot to come forward publicly about regular sightings of UAP, and it was not easy. There has been little support or incentive for aircrew to speak publicly on this topic. There was no upside to reporting hard-to-explain sightings within the chain of command, let alone doing so publicly. For pilots to feel comfortable, it will require a culture shift inside organizations and in society at large. I have seen for myself on radar and talked with the pilots who have experienced near misses with mysterious objects off the Eastern Seaboard that have triggered unsafe evasive actions and mandatory safety reports. There were 50 or 60 people who flew with me in 2014-2015 and could tell you they saw UAP every day. Yet only one other pilot has confirmed this publicly. I spoke out publicly in 2019, at great risk personally and professionally, because nothing was being done. The ODNI report itself notes that concentrated efforts to reduce stigma have been a major reason for the increase in reports this year. To get the data and analyze it scientifically, we must uproot the lingering cultural stigma of tin foil hats and "UFOs" from the 1950s that stops pilots from reporting the phenomena and scientists from studying it. Last September, the U.S. Navy said that all of the government's UFO videos are classified information and releasing any additional UFO videos would "harm national security." Read more of this story at Slashdot.

Finland's Most-Wanted Hacker Nabbed In France

Slashdot - Your Rights Online - Tue, 2023-02-07 00:00
An anonymous reader quotes a report from KrebsOnSecurity: Julius "Zeekill" Kivimaki, a 25-year-old Finnish man charged with extorting a local online psychotherapy practice and leaking therapy notes for more than 22,000 patients online, was arrested this week in France. A notorious hacker convicted of perpetrating tens of thousands of cybercrimes, Kivimaki had been in hiding since October 2022, when he failed to show up in court and Finland issued an international warrant for his arrest. [...] According to the French news site, Kivimaki was arrested around 7 a.m. on Feb. 3, after authorities in Courbevoie responded to a domestic violence report. Kivimaki had been out earlier with a woman at a local nightclub, and later the two returned to her home but reportedly got into a heated argument. Police responding to the scene were admitted by another woman -- possibly a roommate -- and found the man inside still sleeping off a long night. When they roused him and asked for identification, the 6'3" blonde, green-eyed man presented an ID that stated he was of Romanian nationality. The French police were doubtful. After consulting records on most-wanted criminals, they quickly identified the man as Kivimaki and took him into custody. Kivimaki initially gained notoriety as a self-professed member of the Lizard Squad, a mainly low-skilled hacker group that specialized in DDoS attacks. But American and Finnish investigators say Kivimaki's involvement in cybercrime dates back to at least 2008, when he was introduced to a founding member of what would soon become HTP. Finnish police said Kivimaki also used the nicknames "Ryan", "RyanC" and "Ryan Cleary" (Ryan Cleary was actually a member of a rival hacker group -- LulzSec -- who was sentenced to prison for hacking).
Kivimaki and other HTP members were involved in mass-compromising web servers using known vulnerabilities, and by 2012 Kivimaki's alias Ryan Cleary was selling access to those servers in the form of a DDoS-for-hire service. Kivimaki was 15 years old at the time. In 2013, investigators going through devices seized from Kivimaki found computer code that had been used to crack more than 60,000 web servers using a previously unknown vulnerability in Adobe's ColdFusion software. Multiple law enforcement sources told KrebsOnSecurity that Kivimaki was responsible for making an August 2014 bomb threat against former Sony Online Entertainment President John Smedley that grounded an American Airlines plane. That incident was widely reported to have started with a tweet from the Lizard Squad, but Smedley and others said it started with a call from Kivimaki. Kivimaki also was involved in calling in multiple fake bomb threats and "swatting" incidents -- reporting fake hostage situations at an address to prompt a heavily armed police response to that location. Read more of this story at Slashdot.

Microsoft Swears It's Not Coming For Your Data With Scan For Old Office Versions

Slashdot - Your Rights Online - Mon, 2023-02-06 20:40
Microsoft wants everyone to know that it isn't looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software. From a report: In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out those Office versions that are no longer supported or nearing the end of support. Those include Office 2007 (which saw support end in 2017), Office 2010 (in 2020), and the 2013 build (this coming April). The company stressed that the update would run only one time and would not install anything on the user's Windows system, adding that the file for the update is scanned to ensure it's not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it. The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won't access private data despite scanning their systems. The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user's systems once the scan is completed. Read more of this story at Slashdot.

Are Citywide Surveillance Cameras Effective?

Slashdot - Your Rights Online - Mon, 2023-02-06 02:05
The Washington Post looks at the effectiveness -- and the implications -- of "citywide surveillance" networks, including Memphis's SkyCop, "built on 2,100 cameras that broadcast images back to a police command center every minute of every day." Known for their blinking blue lights, the SkyCop cameras now blanket many of the city's neighborhoods, gas stations, sidewalks and parks. The company that runs SkyCop, whose vice president of sales previously worked for the Memphis police, promotes it as a powerful crime deterrent that can help "neighborhoods take back their streets." But after a decade in which Memphis taxpayers have paid $10 million to expand the surveillance system, crime in the city has gone up.... No agency tracks nationwide camera installation statistics, but major cities have invested heavily in such networks. Police in Washington, D.C., said they had deployed cameras at nearly 300 intersections by 2021, up from 48 in 2007. In Chicago, more than 30,000 cameras are viewable by police; in parts of New York City, the cameras watch every block. Yet researchers have found no substantive evidence that the cameras actually reduce crime.... In federal court, judges have debated whether round-the-clock police video recording could constitute an unreasonable search as prohibited by the Fourth Amendment. Though the cameras are installed in public areas, they also capture many corners of residential life, including people's doors and windows. "Are we just going to put these cameras in front of everybody's house and monitor them and see if anybody's up to anything?" U.S. Circuit Judge O. Rogeriee Thompson said during oral arguments for one such case in 2021.... Dave Maass, a director at the digital rights group Electronic Frontier Foundation who researches police surveillance technology, said these systems have expanded rapidly in the United States without real evidence that they have led to a drop in crime.
"This often isn't the community coming in and asking for it, it's police going to conferences where ... vendors are promising the world and that they'll miraculously solve crimes," Maass said. "But it's just a commercial thing. It's just business." Nonetheless, the Post notes that in Memphis many SkyCop cameras are even outfitted "with license-plate recognition software that records the time and location of every passing car." Read more of this story at Slashdot.

After Cracking Another 'Secure' Messaging App, European Police Arrest 42

Slashdot - Your Rights Online - Sat, 2023-02-04 20:34
Slashdot reader lexios shares this report from the French international news agency Agence France-Presse: European police arrested 42 suspects and seized guns, drugs and millions in cash, after cracking another encrypted online messaging service used by criminals, Dutch law enforcement said Friday. Police launched raids on 79 premises in Belgium, Germany and the Netherlands following an investigation that started back in September 2020 and led to the shutting down of the covert Exclu Messenger service. After police and prosecutors got into the Exclu secret communications system, they were able to read the messages passed between criminals for five months before the raids, said Dutch police. Those arrested include users of the app, as well as its owners and controllers. Police in France, Italy and Sweden, as well as Europol and Eurojust, its justice agency twin, also took part in the investigation. The police raids uncovered at least two drugs labs, one cocaine-processing facility, several kilograms of drugs, four million euros in cash, luxury goods and guns, Dutch police said. The "secure" messaging app was used by around 3,000 people who paid 800 euros (roughly $866 USD) for a six-month subscription. Read more of this story at Slashdot.

Dashlane Publishes Its Source Code To GitHub In Transparency Push

Slashdot - Your Rights Online - Sat, 2023-02-04 03:25
Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step it says in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though it also appears to include the codebase for its Apple Watch and Mac apps even though Dashlane hasn't specifically announced that. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase "fully open source," but in response to a handful of questions posed by TechCrunch, it appears that won't in fact be the case. At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too -- however, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it's for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. [...] "The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application," the company wrote. "Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code."
On top of that, the company says that a benefit of releasing its code is to perhaps draw in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called "white-hat hackers" will now be better equipped to earn bug bounties. "Transparency and trust are part of our company values, and we strive to reflect those values in everything we do," Dashlane continued. "We hope that being transparent about our code base will increase the trust customers have in our product." Read more of this story at Slashdot.

Judge Uses ChatGPT To Make Court Decision

Slashdot - Your Rights Online - Sat, 2023-02-04 01:20
An anonymous reader quotes a report from Motherboard: A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator -- or at least, the first time we know about it. Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (PDF) dated January 30, 2023. "The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in the decision, which was translated from Spanish. "Accordingly, we entered parts of the legal questions posed in these proceedings." "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI." The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included "Is an autistic minor exonerated from paying fees for their therapies?" and "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?" Garcia included the chatbot's full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." 
After detailing the exchanges with the AI, the judge then adopts its responses, alongside his own legal arguments, as grounds for his decision. Read more of this story at Slashdot.

Replika, a 'Virtual Friendship' AI Chatbot, Hit With Data Ban in Italy Over Child Safety

Slashdot - Your Rights Online - Sat, 2023-02-04 00:40
An anonymous reader shares a report: San Francisco-based AI chatbot maker, Replika -- which operates a freemium 'virtual friendship' service based on customizable digital avatars whose "personalized" responses are powered by artificial intelligence (and designed, per its pitch, to make human users feel better) -- has been ordered by Italy's privacy watchdog to stop processing local users' data. The Garante said it's concerned Replika's chatbot technology poses risks to minors -- and also that the company lacks a proper legal basis for processing children's data under the EU's data protection rules. Additionally, the regulator is worried about the risk the AI chatbots could pose to emotionally vulnerable people. It's also accusing Luka, the developer behind the Replika app, of failing to fulfil regional legal requirements to clearly convey how it's using people's data. The order to stop processing Italians' data is effective immediately. In a press release announcing its intervention, the watchdog said: "The AI-powered chatbot, which generates a 'virtual friend' using text and video interfaces, will not be able to process [the] personal data of Italian users for the time being. A provisional limitation on data processing was imposed by the Italian Garante on the U.S.-based company that has developed and operates the app; the limitation will take effect immediately." Read more of this story at Slashdot.

Former Ubiquiti Employee Pleads Guilty To Attempted Extortion Scheme

Slashdot - Your Rights Online - Fri, 2023-02-03 22:41
A former employee of network technology provider Ubiquiti pleaded guilty to multiple felony charges after posing as an anonymous hacker in an attempt to extort almost $2 million worth of cryptocurrency while employed at the company. From a report: Nickolas Sharp, 37, worked as a senior developer for Ubiquiti between 2018 and 2021 and took advantage of his authorized access to Ubiquiti's network to steal gigabytes worth of files from the company during an orchestrated security breach in December 2020. Prosecutors said that Sharp used the Surfshark VPN service to hide his home IP address and intentionally damaged Ubiquiti's computer systems during the attack in an attempt to conceal his unauthorized activity. Sharp later posed as an anonymous hacker who claimed to be behind the incident while working on an internal team that was investigating the security breach. While concealing his identity, Sharp attempted to extort Ubiquiti, sending a ransom note to the company demanding 50 Bitcoin (worth around $1.9 million at that time) in exchange for returning the stolen data and disclosing the security vulnerabilities used to acquire it. When Ubiquiti refused the ransom demands, Sharp leaked some of the stolen data to the public. The FBI was prompted to investigate Sharp's home around March 24th, 2021, after it was discovered that a temporary internet outage had exposed Sharp's IP address during the security breach. Further reading: Ubiquiti Files Case Against Security Blogger Krebs Over 'False Accusations'; Former Ubiquiti Dev Charged For Trying To Extort His Employer.

Kremlin's Tracking of Russian Dissidents Through Telegram Suggests App's Encryption Has Been Compromised

Slashdot - Your Rights Online - Fri, 2023-02-03 19:24
Russian antiwar activists placed their faith in Telegram, a supposedly secure messaging app. How does Putin's regime seem to know their every move? From a report: Matsapulina's case [anecdote in the story] is hardly an isolated one, though it is especially unsettling. Over the past year, numerous dissidents across Russia have found their Telegram accounts seemingly monitored or compromised. Hundreds have had their Telegram activity wielded against them in criminal cases. Perhaps most disturbingly, some activists have found their "secret chats" -- Telegram's purportedly ironclad, end-to-end encrypted feature -- behaving strangely, in ways that suggest an unwelcome third party might be eavesdropping. These cases have set off a swirl of conspiracy theories, paranoia, and speculation among dissidents, whose trust in Telegram has plummeted. In many cases, it's impossible to tell what's really happening to people's accounts -- whether spyware or Kremlin informants have been used to break in, through no particular fault of the company; whether Telegram really is cooperating with Moscow; or whether it's such an inherently unsafe platform that cooperation is merely what appears to be going on.

Documents Show Meta Paid For Data Scraping Despite Years of Denouncing It

Slashdot - Your Rights Online - Fri, 2023-02-03 02:02
An anonymous reader quotes a report from Engadget: Meta has routinely fought data scrapers, but it also participated in that practice itself -- if not necessarily for the same reasons. Bloomberg has obtained legal documents from a Meta lawsuit against a former contractor, Bright Data, indicating that the Facebook owner paid its partner to scrape other websites. Meta spokesperson Andy Stone confirmed the relationship in a discussion with Bloomberg, but said his company used Bright Data to build brand profiles, spot "harmful" sites and catch phishing campaigns, not to target competitors. Stone added that data scraping could serve "legitimate integrity and commercial purposes" so long as it was done legally and honored sites' terms of service. Meta terminated its arrangement with Bright Data after the contractor allegedly violated company terms when gathering and selling data from Facebook and Instagram. Neither Bright Data nor Meta is saying which sites they scraped. Bright Data is countersuing Meta in a bid to keep scraping Facebook and Instagram, arguing that it only collects publicly available information and respects both European Union and US regulations.

Anker Finally Comes Clean About Its Eufy Security Cameras

Slashdot - Your Rights Online - Fri, 2023-02-03 00:00
An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked. In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request. That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! 
But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here]. As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but the practice has been discontinued. No recognition data was included with images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."

GoodRx Leaked User Health Data To Facebook and Google, FTC Says

Slashdot - Your Rights Online - Thu, 2023-02-02 02:10
An anonymous reader quotes a report from The New York Times: Millions of Americans have used GoodRx, a drug discount app, to search for lower prices on prescriptions like antidepressants, H.I.V. medications and treatments for sexually transmitted diseases at their local drugstores. But U.S. regulators say the app's coupons and convenience came at a high cost for users: wrongful disclosure of their intimate health information. On Wednesday, the Federal Trade Commission accused the app's developer, GoodRx Holdings, of sharing sensitive personal data on millions of users' prescription medications and illnesses with companies like Facebook and Google without authorization. [...] From 2017 to 2020, GoodRx uploaded the contact information of users who had bought certain medications, like birth control or erectile dysfunction pills, to Facebook so that the drug discount app could identify its users' social media profiles, the F.T.C. said in a legal complaint. GoodRx then used the personal information to target users with ads for medications on Facebook and Instagram, the complaint said, "all of which was visible to Facebook." GoodRx also targeted users who had looked up information on sexually transmitted diseases on HeyDoctor, the company's telemedicine service, with ads for HeyDoctor's S.T.D. testing services, the complaint said. Those data disclosures, regulators said, flouted public promises the company had made to "never provide advertisers any information that reveals a personal health condition." The company's information-sharing practices, the agency said, violated a federal rule requiring health apps and fitness trackers that collect personal health details to notify consumers of data breaches. While GoodRx agreed to settle the case, it said it disagreed with the agency's allegations and admitted no wrongdoing. 
The F.T.C.'s case against GoodRx could upend widespread user-profiling and ad-targeting practices in the multibillion-dollar digital health industry, and it puts companies on notice that regulators intend to curb the nearly unfettered trade in consumers' health details. [...] If a judge approves the proposed federal settlement order, GoodRx will be permanently barred from sharing users' health information for advertising purposes. To settle the case, the company also agreed to pay a $1.5 million civil penalty for violating the health breach notification rule.

Stable Diffusion 'Memorizes' Some Images, Sparking Privacy Concerns

Slashdot - Your Rights Online - Thu, 2023-02-02 00:50
An anonymous reader quotes a report from Ars Technica: On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed. Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles. However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of trained images. Researchers only extracted 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario. Also, the researchers note that the "memorization" they've discovered is approximate since the AI model cannot produce identical byte-for-byte copies of the training images. By definition, Stable Diffusion cannot memorize large amounts of data because the 160 million-image training dataset is many orders of magnitude larger than the 2GB Stable Diffusion AI model. That means any memorization that exists in the model is small, rare, and very difficult to accidentally extract.
Still, even when present in very small quantities, the paper appears to show that approximate memorization in latent diffusion models does exist, and that could have implications for data privacy and copyright. The results may one day affect potential image synthesis regulation if the AI models become considered "lossy databases" that can reproduce training data, as one AI pundit speculated. Although considering the 0.03 percent hit rate, they would have to be considered very, very lossy databases -- perhaps to a statistically insignificant degree. [...] Eric Wallace, one of the paper's authors, shared some personal thoughts on the research in a Twitter thread. As stated in the paper, he suggested that AI model-makers should de-duplicate their data to reduce memorization. He also noted that Stable Diffusion's model is small relative to its training set, so larger diffusion models are likely to memorize more. And he advised against applying today's diffusion models to privacy-sensitive domains like medical imagery.
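The memorization rate quoted above is easy to sanity-check with back-of-envelope arithmetic. A minimal Python sketch, using only the figures as reported here (the "roughly 0.03 percent" appears to correspond to the 94 direct matches alone; including near-matches is an assumption on our part, shown for comparison):

```python
# Extraction figures as reported for Carlini et al.'s experiment.
direct_matches = 94          # near-identical extracted images
near_matches = 109           # perceptual near-matches
candidates_tested = 350_000  # high-probability-of-memorization candidates

direct_rate = direct_matches / candidates_tested * 100
combined_rate = (direct_matches + near_matches) / candidates_tested * 100

print(f"direct-match rate:      {direct_rate:.3f}%")    # ~0.027%, i.e. roughly 0.03%
print(f"including near-matches: {combined_rate:.3f}%")  # ~0.058%
```

Either way, the rate is a fraction of a tenth of a percent of the most memorization-prone candidates, which is why the article characterizes any hypothetical "lossy database" reading as very lossy indeed.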