Channel: legal | TechCrunch






The hunted becomes the hunter: How Cloudflare’s fight with a ‘patent troll’ could alter the game

Matthew Prince knew what was coming. The CEO of Cloudflare, an internet security company and content delivery network in San Francisco, was behind his desk when the emails began. College classmates-turned-defense attorneys were reaching out to say hello and to ask: did Prince perhaps need help to fight a lawsuit they’d seen filed against Cloudflare?

Microsoft is quietly fighting a clever war against Russian hacking group Fancy Bear

While the White House mulls striking up a joint cyber program with Russia, an unlikely vigilante is taking care of business. As the Daily Beast reports, Microsoft has been waging a quiet war against the hacking entity known as Fancy Bear, which is believed to be associated with the GRU, Russia’s covert military intelligence agency.

Snopes seeks crowdfunding in ownership battle

How many times have you heard some urban legend, chain letter or misleading bit of news repeated and immediately found a thorough, fact-based debunking on Snopes? Like every damn day for the last 20 years or so, right? Snopes was there for you when you were looking up fake news and cryptids — but it’s in trouble, and asking you to return the favor.

Blue Apron faces lawsuit from former employee who alleges violation of Family and Medical Leave Act

On the heels of Blue Apron’s lackluster debut on the public market, a former employee is suing the company for allegedly violating the Family and Medical Leave Act, which requires that companies provide employees with job-protected, unpaid leave for certain medical and family reasons.

Judge rules Anthony Levandowski can be called to testify in Uber/Waymo trial

In the latest hearing to define the scope of the upcoming trial between self-driving technology rivals Waymo and Uber, District Judge William Alsup said Anthony Levandowski, the star engineer at the center of the affair, could be called to testify in court.

The CRISPR patent battle is back on as UC Berkeley files an appeal

The University of California, Berkeley has filed an appeal in a heated CRISPR patent interference case that was decided earlier this year in favor of the Broad Institute of MIT and Harvard.

A disappointed Pokémon GO Fest attendee has proposed a class-action lawsuit against Niantic

For many, Pokémon GO is so summer 2016. But at least 20,000 people are still die-hard fans of the game, as proven by their participation in Pokémon GO Fest in Chicago’s Grant Park last week. Unfortunately, the event was a disappointment to many attendees, who could not get data service and ended up waiting on long lines without the AR-based game to keep them entertained. And beyond…

Theranos has settled its lawsuit with Walgreens

Blood testing startup Theranos has reached a settlement with former customer Walgreens. According to the terms of the agreement, Walgreens will dismiss its lawsuit against Theranos “with no finding or implication of liability.” The terms of the settlement are confidential.

Amazon adds parental consent to Alexa skills aimed at children, launches first legal kids’ skills

Amazon today is launching the first Alexa skills specifically aimed at children, which will go live with a new Verified Parental Consent feature in order to operate within the confines of child data protections, like the Children’s Online Privacy Protection Act (COPPA). This move opens the door to larger children’s media brands who have, until now, avoided building apps for…

Family-friendly streaming service VidAngel refuses to shut down, despite court order


Family-friendly streaming service VidAngel is refusing to shut down, in defiance of a court order issued earlier in December. The service, which alters copyrighted material in order to remove adult language, violence and nudity, was found to be in violation of the law and ordered to temporarily stop circumventing copyright protections, copying copyrighted materials, and streaming them over the internet.

Warner Bros., Disney and Fox studios had won a preliminary injunction against Utah-based VidAngel, after arguing that it was effectively operating as an unlicensed video-on-demand streaming service. U.S. District Judge Andre Birotte Jr. agreed, saying that VidAngel didn’t have the right to hack the DVDs to remove copyright protections and stream them to customers, because the digital content it was streaming doesn’t come from an authorized copy.

VidAngel has tried to find loopholes in the law to make its service work. Its business model involves selling DVDs for $20 to customers, then buying them back for $19 after they’re viewed. That means it’s effectively offering $1 movie rentals. In the meantime, it removes the copyright protection from the DVDs and Blu-rays in order to censor the content before streaming the movie over the internet to customers.

The company said it had the right to offer customers a private performance of the movie, and that the Family Home Movie Act of 2005 allowed the use of technology to censor the DVDs. But the judge said the Movie Act requires an “authorized copy” of the film, which VidAngel doesn’t have. In addition, the judge said that even if VidAngel created a valid ownership interest in the DVD, it would only apply to the physical disk – not the content it’s streaming from its servers. (VidAngel has been streaming from a master copy of the movie, not the DVD the customer temporarily owned).

The studios say they’re now bringing VidAngel’s defiance of the injunction to the court’s attention.

In a joint statement from Warner Bros, Disney and Fox, the studios said the following:

“Defying last week’s injunction, VidAngel continues to illegally stream our content without a license and is expanding its infringement by adding new titles. We have brought VidAngel’s indefensible violation of the injunction to the court’s attention. As the court made clear in its order, VidAngel’s unauthorized acts of ripping, copying and streaming our movies and TV shows infringe copyright and violate the Digital Millennium Copyright Act. VidAngel’s filtering of content has nothing to do with the claims against it and does not excuse its illegal activities.”

In a new filing, the studios note that not only has VidAngel refused to shut down, it’s also continuing to add new releases to its service. This includes Warner Bros.’s “Sully” and “Storks” as well as Fox’s “Miss Peregrine’s Home for Peculiar Children.”

These titles were not released on DVD until after the preliminary injunction was issued. And, as with other titles VidAngel offers, their release on the service is undercutting the studios’ windowing system. That is, movies generally first become available for rent or purchase, before they’re made available for streaming. VidAngel does not have streaming deals with the studios.

The studios now want VidAngel held in contempt.

“If VidAngel will not comply with the Preliminary Injunction immediately,” the studios said in a filing with the U.S. District Court of California, Western Division, “Plaintiffs will have no option other than to move ex parte for an order to show cause why VidAngel should not be held in contempt.”

VidAngel, of course, is appealing the preliminary injunction, with plans to fund its court battles thanks to money raised from its customers through crowdfunding.

However, its application to stay the preliminary injunction doesn’t give it the right to continue its business in the meantime, the studios argue.

A video statement from VidAngel (see below) indicates the company plans to take the battle all the way to the Supreme Court.

VidAngel Special Announcement: “VidAngel will continue the legal fight for filtering, while also launching original family-friendly content.” (Posted by VidAngel on Tuesday, December 13, 2016)


Artificial intelligence and the law


Laws govern the conduct of humans, and sometimes the machines that humans use, such as cars. But what happens when those cars become human-like, as in artificial intelligence that can drive cars? Who is responsible for any laws that are violated by the AI?

This article, written by a technologist and a lawyer, examines the future of AI law.

The field of AI is in a sort of renaissance, with research institutions and R&D giants pushing the boundaries of what AI is capable of. Although most of us are unaware of it, AI systems are everywhere, from bank apps that let us deposit checks with a picture, to everyone’s favorite Snapchat filter, to our handheld mobile assistants.

Currently, one of the next big challenges that AI researchers are tackling is reinforcement learning, a training method that allows AI models to learn from their past experiences. Unlike other methods of generating AI models, reinforcement learning seems more like sci-fi than reality: we create a grading system for our model, and the AI must determine the best course of action in order to get a high score.
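To make the "grading system" idea concrete, here is a minimal sketch of one common reinforcement-learning method, tabular Q-learning, on a hypothetical toy problem. The five-state chain environment, rewards and hyperparameters below are all illustrative choices, not taken from any production system.

```python
import random

# Toy tabular Q-learning: the "grading system" is the reward signal.
N_STATES = 5           # states 0..4; state 4 is the goal
ACTIONS = [+1, -1]     # step right or left along the chain
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Environment dynamics: reward 1 for reaching the goal, else 0."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                    # training episodes
    s = 0
    while s != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit the best-scoring action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy steps right toward the goal from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The model is never told the rule; it discovers that moving toward the goal maximizes its score, which is exactly the trial-and-error dynamic described above.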

Research into complex reinforcement learning problems has shown that AI models are capable of finding varying methods to achieve positive results. In the years to come, it might be common to see reinforcement learning AI integrated with more hardware and software solutions, from AI-controlled traffic signals capable of adjusting light timing to optimize the flow of traffic to AI-controlled drones capable of optimizing motor revolutions to stabilize videos.

How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?

Traditionally, the legal system has found liability for software and robotics only where the developer was negligent or could foresee harm. For example, in Jones v. W + M Automation, Inc., a 2007 case from New York state, the court did not find the defendant liable after a robotic gantry loading system injured a worker, because it found that the manufacturer had complied with regulations.


But in reinforcement learning, there’s no fault by humans and no foreseeability of such an injury, so traditional tort law would say that the developer is not liable. That certainly will pose Terminator-like dangers if AI keeps proliferating with no responsibility.

The law will need to adapt to this technological change in the near future. It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions, given personhood and hauled into court. That would assume that the legal system, which has been developed for over 500 years in common law and various courts around the world, would be adaptable to the new situation of an AI.

An AI is by design artificial, and thus ideas such as liability or a jury of peers appear meaningless. A criminal courtroom would be incompatible with AI (unless the developer intended to create harm, which would be its own crime).

But the real question is whether the AI should be liable if something goes wrong and someone gets hurt. Isn’t that the natural order of things? We don’t regulate non-human behavior, like animals or plants or other parts of nature. Bees aren’t liable for stinging you. Given the limits of the court system, the most likely reality is that the world will need to adopt a standard for AI under which manufacturers and developers agree to abide by general ethical guidelines, such as through a technical standard mandated by treaty or international regulation. And this standard will be applied only when it is foreseeable that the algorithms and data can cause harm.

This likely will mean convening a group of leading AI experts, such as OpenAI, and establishing a standard that includes explicit definitions for neural network architectures (the structures that determine how an AI model is trained and how its outputs are interpreted), as well as quality standards to which AI must adhere.

Standardizing what the ideal neural network architecture should be is somewhat difficult, as some architectures handle certain tasks better than others. One of the biggest benefits that would arise from such a standard would be the ability to substitute AI models as needed without much hassle for developers.

Currently, switching from an AI designed to recognize faces to one designed to understand human speech would require a complete overhaul of the neural network associated with it. While there are benefits to creating an architecture standard, many researchers will feel limited in what they can accomplish while sticking to the standard, and proprietary network architectures might be common even when the standard is present. But it is likely that some universal ethical code will emerge as conveyed by a technical standard for developers, formally or informally.
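The substitution benefit described above boils down to models sharing a common interface, so the surrounding code never has to change. The sketch below illustrates the idea; the `Model` interface and the two stand-in classes are hypothetical, not part of any proposed standard.

```python
from typing import List, Protocol

class Model(Protocol):
    """A hypothetical standardized model interface."""
    def predict(self, features: List[float]) -> str: ...

class FaceRecognizer:
    def predict(self, features: List[float]) -> str:
        # Stand-in logic; a real model would run a neural network here.
        return "face" if sum(features) > 1.0 else "no face"

class SpeechClassifier:
    def predict(self, features: List[float]) -> str:
        return "speech" if features and features[0] > 0.5 else "silence"

def run(model: Model, features: List[float]) -> str:
    # Depends only on the shared interface, so models are interchangeable.
    return model.predict(features)

print(run(FaceRecognizer(), [0.8, 0.9]))
print(run(SpeechClassifier(), [0.8, 0.9]))
```

Either class can be swapped in without touching `run`, which is the "without much hassle for developers" property a shared architecture standard would aim for.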

The concern for “quality,” including avoidance of harm to humans, will increase as we start seeing AI in control of more and more hardware. Not all AI models are created the same, as two models created for the same task by two different developers will work very differently from each other. Training an AI can be affected by a multitude of things, including random chance. A quality standard ensures that only AI models trained properly and working as expected would make it into the market.

For such a standard to actually have any power, we will most likely need some sort of government involvement, which does not seem too far off, considering recent discussions in the British Parliament regarding the future regulation of AI and robotics research and applications. Although no concrete plans have been laid out, Parliament seems conscious of the need to create laws and regulations before the field matures. As stated by the House of Commons Science and Technology Committee, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.

Postmates now allows drivers to opt out of mandatory arbitration


In fall 2015, the National Labor Relations Board filed a complaint against delivery service Postmates that challenged the legality of the company’s mandatory arbitration agreement between it and its contractors. In Postmates’ fleet agreement, which contractors must sign as a condition of hire, the company had required that workers settle disagreements through arbitration. In other words, workers were asked to waive their rights to pursue collective actions, like a class action suit, for example.

Yesterday, Postmates updated its legal document to offer contractors a way to opt out of mandatory arbitration. The company confirmed the change was made on Thursday, but denies it’s related to the NLRB case, which is still pending.

“Like our terms of service, we regularly update this agreement so it’s in line with our business needs,” a Postmates spokesperson stated.

The NLRB’s case against Postmates, originally filed in October 2015, is broader than the mandatory arbitration issue.

According to a court filing, an unnamed customer service representative told the NLRB that they had been instructed not to discuss terms and conditions of employment, including safety issues, with other employees.

Indirectly, the case brought up another question, as well: whether or not Postmates’ drivers were considered employees. The delivery service — like others in the on-demand space such as Uber or Lyft, for example — considers its workers independent contractors, not employees. Most gig economy employers go this route because it means they won’t have to offer the workers the same level of benefits, like healthcare or overtime.

The fact that the NLRB got involved with Postmates indicates that it believes the contractors to be employees. In fact, a press release from the NLRB’s office in Chicago referred to the workers as “employee drivers.”

Postmates had earlier responded to the NLRB’s complaint in October 2016 by denying all allegations and requesting that the court dismiss the case in its entirety.

However, one of the actions the NLRB had requested of Postmates in its original complaint was to drop its mandatory arbitration clause, which it described as “unlawful,” and alert all employees of the rescission.

Postmates did not exactly drop the mandatory arbitration clause in the agreement updated yesterday, but it did give contractors the means to opt out.

In a newly added section, the company explains that contractors have the right to opt out of arbitration, and arbitration is no longer a mandatory requirement for working with the company.

The new section reads as follows, in part:

Right to Opt Out of Arbitration. Arbitration is not a mandatory condition of Contractor’s contractual relationship with Postmates, and therefore Contractor may submit a statement notifying Postmates that Contractor wishes to opt out of this Mutual Arbitration Provision. 

The section continues to detail how the contractor can opt out via email or postal mail, and the time frame allowed for that action. It then states that contractors have the right to consult with an attorney, at their own expense, and says that class action waivers will still be enforced in arbitration. (Any contractor that doesn’t opt out waives the right to take disputes to court.)

The case itself between the NLRB and Postmates is still pending, but Postmates filed on February 10th a motion for abeyance, which is a request to put the case on a temporary hold. (The motion itself is not available, and requires an FOIA request to retrieve it. The NLRB confirmed the nature of the motion with TechCrunch, but could not comment on the details.)

Postmates’ claim that the modification to the contractor agreement is not related to the NLRB case seems suspect, given the timing: the motion was filed on the 10th, and a week later the contractor agreement was modified. Possibly, the company hopes to use the modified agreement as a reason why the case should be dismissed.

Amazon will refund millions of unauthorized in-app purchases made by kids


Amazon will refund millions of unauthorized in-app purchases kids made on mobile devices, having now dropped its appeal of last year’s ruling by a federal judge who sided with the Federal Trade Commission in the agency’s lawsuit against Amazon. The FTC’s original complaint said that Amazon should be liable for millions of dollars it charged customers, because of the way its Appstore software was designed – that is, it allowed kids to spend unlimited amounts of money in games and other apps without requiring parental consent.

The FTC had previously settled with both Apple and Google on similar charges, before turning its sights to Amazon.

The issue had to do with the way the Amazon Appstore’s in-app purchasing system worked. The Amazon Appstore is the store that comes preloaded on Amazon mobile devices, like Kindle Fire tablets, for example, though there is a way to load it onto other Android devices, too.

Of course, many kids’ game developers notoriously try to blur the lines between what’s free and what’s paid. They also often design games so that they only fully function when kids use in-game items, which can sometimes be earned through gameplay and other times must be purchased through the app itself. Kids are pushed to buy these things regularly – as any parent who has experienced their kids’ begging for these items can tell you.

But in Amazon’s Appstore, which launched back in 2011, the company didn’t originally require passwords on in-app purchases. This allowed kids to buy coins and other items to their hearts’ content. One particularly awful example involved a game called “Ice Age Village” that offered an in-app purchase of $99.99.

Amazon introduced password-protected in-app purchases in March 2012, but only for purchases exceeding $20. In early 2013, it updated the system again to require passwords more broadly, but allowed a 15-minute window after a password entry during which no further password was required. The FTC said Amazon didn’t obtain “informed consent” until July 2014.
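The policy timeline above can be summarized as a small decision function. Everything here (the function, field names, and policy labels) is a hypothetical reconstruction for illustration, not Amazon's actual code.

```python
from datetime import datetime, timedelta

PASSWORD_THRESHOLD = 20.00            # March 2012 policy: password only above $20
GRACE_WINDOW = timedelta(minutes=15)  # early 2013 policy: no re-prompt inside this window

def requires_password(price, last_auth, now, policy="2013"):
    """Return True if this in-app purchase should prompt for a password."""
    if policy == "pre-2012":
        return False                       # no password on any in-app purchase
    if policy == "2012":
        return price > PASSWORD_THRESHOLD  # only large purchases gated
    # 2013 policy: prompt unless a password was entered within the grace window
    return last_auth is None or now - last_auth > GRACE_WINDOW

now = datetime(2013, 6, 1, 12, 0)
print(requires_password(99.99, None, now, policy="pre-2012"))     # no gate at all
print(requires_password(19.99, None, now, policy="2012"))         # under the $20 bar
print(requires_password(0.99, now - timedelta(minutes=5), now))   # inside the window
print(requires_password(0.99, now - timedelta(minutes=30), now))  # window expired
```

The sketch makes the FTC's complaint easy to see: under the first two policies, and inside the 15-minute window of the third, a child could complete purchases with no parental check at all.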

To make matters worse, parents complaining weren’t told how to get a refund and Amazon had even suggested at times that refunds weren’t possible, the FTC’s complaint had said.

Amazon and the FTC have now agreed to end appeals related to the earlier ruling, the FTC announced on Tuesday, April 4. Another issue that had come into play was the FTC’s request for an injunction to forbid Amazon from similar conduct in the future. The court denied that injunction, the FTC appealed, and Amazon cross-appealed the ruling that said Amazon had violated the law.

Now the two parties have agreed to end their litigation and begin the refund process.

More than $70 million in in-app charges made between November 2011 and May 2016 may be eligible for refunds, the FTC notes. It’s not likely that all affected customers will take the time to make their requests, however.

Amazon has not yet announced how the refund program will operate or when it will launch, but the FTC says those details are coming shortly.

Qualcomm countersuit claims Apple ‘refuses to acknowledge’ the value of its technology


Apple filed a billion-dollar royalty lawsuit against Qualcomm in January, and today the chip maker hit back with a legal case of its own against the iPhone maker, which, it claims, has refused to acknowledge the value of its technology.

In the original suit, Apple claimed that Qualcomm — which makes money from licensing its patents to technology makers — was charging it for patents “they have nothing to do with.” That, Apple said, included TouchID, display panels and camera components. Apple also accused Qualcomm of inflating prices through its licensing model.

“Apple could not have built the incredible iPhone franchise that has made it the most profitable company in the world, capturing over 90 percent of smartphone profits, without relying upon Qualcomm’s fundamental cellular technologies. Now, after a decade of historic growth, Apple refuses to acknowledge the well established and continuing value of those technologies,” Qualcomm executive VP and general counsel Don Rosenberg said in a statement.

In more specific details, Qualcomm claimed Apple had “breached agreements and mischaracterized agreements and negotiations with Qualcomm,” adding that the phone maker “interfered” with its manufacturing partners that build the iPhone and iPad and use Qualcomm licenses.

Things get more dramatic with the iPhone 7. There are different versions of the device in the U.S. running modem chips from both Intel and Qualcomm. Independent tests had suggested the Qualcomm-powered devices performed better, and Apple was said to have pared back their performance to ensure that customers had the same quality regardless of the internals.

Qualcomm has, unsurprisingly, not taken too kindly to that. The firm said that Apple “chose not to utilize the full performance” of its chips and then “misrepresented the performance disparity” between its tech and Intel’s. Further, Qualcomm claimed Apple threatened it to keep it from going public and explaining that its tech was actually behind the superior iPhone.

“[Apple] has launched a global attack on Qualcomm and is attempting to use its enormous market power to coerce unfair and unreasonable license terms from Qualcomm. We intend to vigorously defend our business model, and pursue our right to protect and receive fair value for our technological contributions to the industry,” Rosenberg, Qualcomm’s general counsel, added.

Federal court decides that Adobe can’t stay under a gag order over search warrant forever


According to newly unsealed documents, a federal court in California ruled that it is unlawful for Adobe to remain under an indefinite gag order regarding a search warrant for one of its users.

In the ruling, the Los Angeles court concluded that the government had not made a sufficient argument to support the ongoing nature of a gag order it issued to Adobe in 2016. Effectively, Adobe successfully argued that these gag orders should come with an expiration date, after which the company can disclose federal investigations into its users as part of routine transparency reporting.

As the case states:

“As written, the NPO [notice preclusion order] at issue herein effectively bars Adobe’s speech in perpetuity. The government does not contend, and has made no showing, that Adobe’s speech will threaten the investigation in perpetuity. Therefore, as written, the NPO manifestly goes further than necessary to protect the government’s interest.”

When the government goes snooping for the digital footprints that software users leave behind, tech companies are increasingly silenced under indefinite gag orders, which prevent them from notifying users about government requests for user data. As in Adobe’s case, these orders are not publicly known until a company prevails against them in court.

Adobe isn’t alone in opposing gag orders. Last month, Cloudflare and CREDO Mobile were finally allowed to identify themselves as recipients of national security letters (NSLs) that similarly prevented them from disclosing the fact that they had ever received such a letter to begin with. In April, Microsoft took the Justice Department to court to challenge one such gag order on constitutional grounds.

Critics like the Electronic Frontier Foundation argue that because they are issued without judicial review and in total secrecy, indefinite gag orders imposed along with national security letters violate the First Amendment.

Additionally, for tech companies prevented from disclosing search warrants on their users, the gag orders impair transparency and can damage user trust. In January, Cloudflare attorney Kenneth Carter detailed how a national security letter from 2013 hindered the company’s efforts to engage effectively in public policy advocacy.

As the government continues to secretly press tech companies for private data on their users, Adobe’s win is a meaningful milestone, but the fight for transparency is far from over.
