Channel: legal | TechCrunch

InCloudCounsel raises $200M, rebrands as Ontra to expand its automation tools for contract management


A typical enterprise grapples with hundreds or thousands of agreements, contracts and other legal documents every year, and it usually engages costly legal counsel either inside or outside the company to assess those documents on their behalf. Now, a startup called InCloudCounsel that is part RPA and part BPO — it has built tools to both automate and, in some cases, outsource this work — is announcing a Series B round of $200 million, money that it will be using to meet demand for its services.

Alongside this, the startup is rebranding to Ontra — signifying “getting to the heart of contracts”, according to the startup’s CEO and founder Troy Pospisil.

The round is being led by Blackstone Growth (the growth equity business of the investment giant), with participation from previous backer Battery Ventures and board member Mike Paulus (who previously ran and sold Assurance IQ to Prudential, and before that was an investor with Andreessen Horowitz). Valuation is not being disclosed, but prior to this, in July 2019, the company raised $40 million in a round led by Battery — a sum that had not been disclosed until now.

The company is not talking about its valuation, but Blackstone's involvement is a marker of the kinds of companies Ontra is targeting and working with: the firm is both a strategic and a financial investor in this round. Blackstone MD Paul Morrissey tells me that the firm is a heavy user of Ontra's tech to filter through the thousands of NDAs and other contracts it issues and receives every year. The technology uses AI techniques like natural language processing (NLP) to guard Blackstone's interests, scanning documents for items that are unusual or might need modifying ahead of signing, then passing those documents to human lawyers for final checks. In all, Ontra is currently processing some 20,000 NDAs monthly.
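Conceptually, that first automated pass amounts to comparing each clause in an incoming document against a library of standard language and routing anything unfamiliar to a human reviewer. Here is a toy Python sketch of the idea — the clause library, threshold and string-similarity measure are all illustrative assumptions, not Ontra's actual system, which relies on trained NLP models:

```python
from difflib import SequenceMatcher

# Toy illustration of the triage step described above: compare each clause
# in an incoming NDA against a library of standard clauses and route
# anything unfamiliar to a human lawyer. The clause library, threshold and
# similarity measure are illustrative assumptions, not a real product's.

STANDARD_CLAUSES = [
    "The receiving party shall keep all confidential information secret.",
    "This agreement terminates two years after the effective date.",
]

def needs_review(clause: str, threshold: float = 0.6) -> bool:
    """Flag a clause for human review if it doesn't closely match any template."""
    best = max(
        SequenceMatcher(None, clause.lower(), std.lower()).ratio()
        for std in STANDARD_CLAUSES
    )
    return best < threshold

incoming = [
    "The receiving party shall keep all confidential information secret.",
    "The receiving party irrevocably assigns all patents to the disclosing party.",
]
flagged = [c for c in incoming if needs_review(c)]  # only the unusual clause
```

The familiar confidentiality clause passes straight through, while the unusual patent-assignment clause gets flagged for a lawyer's attention — the division of labor between software and the lawyer network described above.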

Pospisil said he first came up with the idea for Ontra when he was working as an investment specialist at HIG Capital, where he saw how the need to triage large volumes of contracts would slow down deals and other important transactions that the company was making. HIG would have a large legal counsel on board to handle work, but even so it needed to rely on outside organizations to complement and supplement that.

“Even simple contracts needed to be recorded and catalogued,” he recalled. “We would send these to a law firm, or we might staff up a team, but no one was using any technology to track what they were doing. There was no visibility into what is where, or whether data is lost in the process.” This covered everything from long, complex agreements to important contracts and credit agreements, he said. “All these were incredibly complicated.”

It was 2014, and he was living in San Francisco, watching interesting business models such as Uber's emerge, which gave him the idea to, in his words, “combine tech and a labor model to solve the problem.”

Ontra’s approach thus encompasses two parts. First, tapping into innovations happening in the wider world of document processing, it uses natural language and other algorithmic techniques to process large amounts of documentation faster and more efficiently than a human could. You might think of this as a very specialized form of robotic process automation.

That document processing work, in turn, is handed over to the second part of the Ontra platform: its crowdsourced network of lawyers. Following the model of Uber and others in bringing on gig workers to handle jobs on demand, Ontra has amassed a team of legal professionals — typically corporate lawyers — who provide hours to Ontra as a side gig alongside full-time work, or as a way to earn money from their skills while doing something else (or not working at all). Pospisil said that companies with their own in-house teams sometimes use Ontra just for the processing side to expedite work, but as a general rule, customers take the full package.

“This is like payroll,” he said. “You don’t want to be doing it internally if it’s not strategically important to what you want to be doing as a business.”

Interestingly, Ontra has not yet come up against law firms that see it as competition.

“We were a little worried that we would be considered competition to the law firms, but they really like us,” he said. “They don’t want to be doing this work, either. They are focusing on larger M&A transactions and supporting the company. The world has this weird idea that legal work always needs to be sent to a law firm, but it’s not always efficient to do that, and lawyers are too expensive to do that. It’s not right-sized.”

Morrissey at Blackstone believes that the future lies in continuing to pursue the two sides of the business model, with the lawyer network an important complement to what is at heart a tech company, with more software being added all the time.

“It’s hard to underestimate the tech they have built,” he said, referring to technology that is partly there to make its network of lawyers much more efficient. “It means Ontra is also improving the workflow for them, with analytics that essentially say ‘focus on this’ and not other things in a contract.” The company also offers a full invoicing platform for the legal market, he said, “similar to the piece Uber has built for consumers so that drivers get paid and don’t have to worry about anything else.”


Apple offers $30 million to settle off-the-clock bag search controversy


Last year, California’s supreme court ruled that Apple broke the law by failing to pay employees while they waited for mandatory bag and iPhone searches. Now, Apple has offered to pay $30 million to settle the suit and lawyers for the employees have urged them to accept it, Apple Insider has reported. “This is a significant, non-reversionary settlement reached after nearly eight years of hard-fought litigation,” wrote plaintiff attorney Lee Shalov in the proposed settlement seen by Courthouse News.

Employees launched the suit way back in 2013, saying they weren’t paid while being searched for stolen merchandise or trade secrets. The workers felt they were still under Apple’s “control” during that five- to 20-minute process and should therefore be compensated. Apple in turn argued that the employees could choose not to bring their bags or iPhones, thus avoiding a search in the first place.

Apple won an earlier battle in district court, but the case went to the California Supreme Court on appeal. There, the judges ruled that Apple workers were “clearly under Apple’s control while awaiting, and during, the exit searches.” The court dismissed Apple’s argument that bringing a bag to work was a mere employee convenience, particularly in light of Apple’s claim that employees didn’t necessarily need to bring their iPhones to work.

“Its characterization of the iPhone as unnecessary for its own employees is directly at odds with its description of the iPhone as an ‘integrated and integral’ part of the lives of everyone else,” the judges wrote. In that statement, the court referenced a 2017 Tim Cook interview where he stated that the iPhone was “so so integrated and integral to our lives, you wouldn’t think about leaving home without it.”

The settlement is still subject to approval by the plaintiffs. Nearly 12,000 current and former Apple Store employees in California involved in the lawsuit stand to receive a maximum payment of around $1,200.

Editor’s note: This article originally appeared on Engadget.

Avi Dorfman’s legal battle to be named founding member of Compass has ended in a settlement


Entrepreneur Avi Dorfman, who sued Compass seven years ago for not recognizing him as a co-founder, received a settlement and key acknowledgement today from the now-public real estate company Compass. The result comes just months before an expected trial date for the ongoing lawsuit, in which Dorfman sought a $200 million stake in the company to represent his expertise and role in shaping the business, according to court filings.

“I am pleased to have been recognized as a member of Compass’s founding team and to have resolved this dispute in a manner that is satisfactory to both sides. I wish [CEO] Rob Reffkin and the Compass community only the best,” Dorfman said in a statement to TechCrunch.

In a statement, a Compass spokesperson acknowledged “Mr. Dorfman’s work in the early days of Compass as a founding team member of the company.”

Compass, alongside its recent earnings report, posted an SEC filing that indicated a charge of $21.3 million in connection with the settlement, but the total sum remains undisclosed.

Dorfman filed the lawsuit in 2014, two days after Compass announced its new $360 million valuation. Today, Compass is worth around $4.4 billion.

“Despite the company’s astronomical success due, in part, to Dorfman’s significant contributions towards conceptualizing, creating, and launching the company, Dorfman was intentionally and wrongfully cut-out,” the original complaint from Dorfman said. The lawsuit also alleged that Compass misappropriated trade secrets from RentJolt, Dorfman’s company before he began working on Compass, in violation of a non-disclosure agreement.

For Compass’s part, the company argued that the entrepreneur’s lawsuit was coming from a more opportunistic place. In its motion for summary judgment, the company wrote: “Having spurned multiple offers of employment to join Reffkin’s new real estate venture, Dorfman now seeks a do-over of that decision, claiming he should be awarded tens of millions of dollars in equity in Compass — far in excess of what he could have earned if he had actually chosen to join that venture and invested his time and energy building it into the successful company it is today.”

In its summary judgment filing, Compass also claimed that it changed its business model and “transformed from a start-up venture into a successful real estate technology company.”

This lawsuit illustrates the complex, and increasingly common, tension that can arise between founding team members during the earliest, garage-stage days of a business. Decisions can be made over drinks, ideas hashed out in a single chat, and contracts often come far later in the process. Other prominent founder disputes include the Winklevoss twins versus Facebook’s Mark Zuckerberg and Reggie Brown versus Snapchat.

Arun Subramanian, a partner at Susman Godfrey who worked for Dorfman on this case, spoke to TechCrunch about the growing threat of founder disputes in today’s frenetic funding and startup formation environment.

“We’re in an ideas economy, and so there are more and more companies and successful ventures that are premised on ideas as opposed to products,” he said, compared to a few decades ago. He added that it may have been easy to understand who at Pepsi was responsible for selling a certain number of Pepsi cans per year, but now the questions around impact are more elusive, such as: who came up with the idea for Facebook?

“As we shift into the ideas economy, it is natural that we will have more of these issues come up where a particular individual says, ‘I played a role in the development of this technology, but in the infancy of the business you kind of left me on the sidelines and now, that technology, those ideas, that I helped build are extremely lucrative.’”

Today, Dorfman is the founder of Clearing, a direct-to-consumer digital health startup working on chronic pain that has raised $20 million in funding to date. Compass has had a rocky time on the stock market since its debut last April, with its stock price down nearly 40% compared to its opening price.

Meta files federal lawsuit to uncover individuals running a phishing scam on its platforms


Meta, formerly known as Facebook, announced today that it has filed a federal lawsuit in California to uncover the individuals running a phishing scam. The company says the legal action aims to disrupt phishing attacks designed to trick people into sharing their login credentials on fake login pages for Facebook, Messenger, Instagram and WhatsApp.

For context, phishing attacks lure unsuspecting victims to websites that appear legitimate but are actually deceptively fake, then persuade them to enter sensitive information such as passwords and email addresses. Meta says it found more than 39,000 websites impersonating the login pages of Facebook, Messenger, Instagram and WhatsApp as part of the scheme. It also notes that reports of phishing attacks have been on the rise, prompting the lawsuit.
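A crude way to see how such impersonating domains can be spotted: normalize common character substitutions, then look for a protected brand name embedded in a domain that isn't the brand's own. This is a toy sketch only — the swap table and brand list are assumptions for illustration, and Meta's real detection is far more sophisticated:

```python
# Toy lookalike-domain check: normalize common character swaps, then flag
# domains that contain a brand name but aren't the brand's real site.
# The substitution table and brand list are illustrative assumptions.

HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "@": "a", "vv": "w"}
BRANDS = ["facebook", "instagram", "whatsapp", "messenger"]

def normalize(domain: str) -> str:
    """Undo common character substitutions used to disguise a brand name."""
    d = domain.lower()
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def is_lookalike(domain: str) -> bool:
    d = normalize(domain)
    # A brand name inside a domain that isn't the brand's own is suspicious.
    return any(b in d and not d.endswith(b + ".com") for b in BRANDS)
```

For example, `faceb00k-login.example` normalizes to contain "facebook" and would be flagged, while `facebook.com` itself would not.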

“On these websites, people were prompted to enter their usernames and passwords, which Defendants collected,” Jessica Romero, Meta’s director of platform enforcement and litigation, wrote in a blog post. “As part of the attacks, Defendants used a relay service to redirect internet traffic to the phishing websites in a way that obscured their attack infrastructure. This enabled them to conceal the true location of the phishing websites, and the identities of their online hosting providers and the defendants.”

Romero says that in March, Meta started working with the relay service to suspend thousands of URLs that hosted the phishing websites. Meta plans to continue collaborating with online service providers to disrupt phishing attacks, and notes that it proactively blocks and reports instances of abuse to the security community, domain name registrars and others. The company says it also shares phishing URLs so other platforms can block them as well.

“This lawsuit is one more step in our ongoing efforts to protect people’s safety and privacy, send a clear message to those trying to abuse our platform, and increase accountability of those who abuse technology,” Romero wrote in the blog post.

Meta’s latest lawsuit isn’t the first time the company has cracked down on phishing scams on its platforms. Last month, Meta revealed that it took action against four groups of hackers from Syria and Pakistan. The groups used phishing links to manipulate users into giving up their Facebook credentials. Earlier this year in March, the company also took action against a group of hackers in China known as Earth Empusa or Evil Eye. Meta, which was known as Facebook at the time, said it disrupted the hackers’ ability to use their infrastructure to abuse its platform. The company took similar action against hackers in Bangladesh and Vietnam in 2020.

What hostile takeovers are (and why they’re usually doomed)


Thanks to the machinations of a certain billionaire, the phrase “hostile takeover” has been liberally bandied about the media sphere recently. But while it long ago entered the mainstream lexicon, “hostile takeover” carries with it an air of vagueness — and legalese opacity.

At a high level, a hostile takeover occurs when a company — or a person — attempts to take over another company against the wishes of the target company’s management. That’s the “hostile” aspect of a hostile takeover — merging with or acquiring a company without the consent of that company’s board of directors.

Here’s how it usually goes down: a company — let’s call it “Company A” — submits a bid to purchase a second company (“Company B”) at a (reasonable) rate. Company B’s board of directors rejects the offer, determining it not to be in the best interest of shareholders. Company A then attempts to force the deal, opting for one of several strategies: a proxy vote, a tender offer or a large stock purchase.

The proxy vote route involves Company A persuading shareholders in Company B to vote out Company B’s opposing management. This might entail making changes to the board of directors, like installing members who explicitly support the takeover.

It’s not necessarily easy street, though. Aside from the challenge of rallying shareholder support, proxy votes can be challenged by proxy solicitors — the specialist firms hired to help gather them — which extends the takeover timeline.

That’s why an acquirer might instead make a tender offer. With a tender offer, Company A offers to purchase stock shares from Company B shareholders at a price higher than the market rate (e.g., $15 a share versus $10), with the goal of acquiring enough voting shares to have a controlling interest in Company B (typically over 50% of the voting stock).
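Using the article's illustrative prices ($15 offer against a $10 market price) and an assumed share count, the cost of such a tender offer is simple arithmetic. A sketch, with every number hypothetical:

```python
# Back-of-the-envelope tender offer math. All figures are illustrative
# assumptions, not drawn from any real deal.

shares_outstanding = 100_000_000  # assumed size of Company B
market_price = 10.0               # current market price per share ($)
offer_price = 15.0                # tender offer price per share ($)

# Control requires just over 50% of the voting stock.
control_shares = shares_outstanding // 2 + 1

total_cost = control_shares * offer_price
premium_over_market = control_shares * (offer_price - market_price)
# Company A pays roughly $750 million for control, about $250 million
# of which is pure premium over the market price of those shares.
```

That premium is the carrot that persuades Company B's shareholders to tender their shares over their board's objections — and a big part of why tender offers are so costly for the acquirer.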

Tender offers tend to be costly and time consuming. By U.S. law, the acquiring company is required to disclose its offer terms, the source of its funds and its proposed plans if the takeover is successful. The law also sets deadlines by which shareholders must make their decisions, and it gives both companies ample time to state their cases.

Alternatively, Company A could attempt to buy the necessary voting stock in Company B in the open market (a “toehold acquisition”). Or they could make an unsolicited offer public, a mild form of pressure known as a “bear hug.”

A short history of hostile takeover attempts

Hostile takeovers constitute a significant portion of overall merger and acquisition (M&A) activity. For example, in 2017, hostile takeovers reportedly accounted for $575 billion worth of acquisition bids — about 15% of that year’s total M&A volume.

But how successful are hostile takeovers, typically? According to a 2002 CNET article, between 1997 and 2002, target companies in the U.S. across all industries fended off 30% to 40% of the roughly 200 takeover attempts, while 20% to 30% agreed to be purchased by “white knight” companies. In the context of a hostile takeover, a “white knight” is a friendly investor that acquires a company, with support from the target company’s board of directors, when it’s facing a hostile acquisition.

Confined to the past two decades or so, the tech industry hasn’t seen an outsized number of hostile takeover attempts. That’s partly because — as the CNET piece notes — the value of tech companies is often tied to the expertise of their workers. As evidenced this month, hostile takeovers tend not to have positive social ramifications for the target’s workforce. The distraction and lingering uncertainty from a hostile action could lead to a flight of talent at both the top and middle levels.

During the same time frame referenced earlier — 1997 to 2002 — there were only nine hostile takeover attempts against tech companies. Four were successful, including AT&T’s buyout of enterprise service provider NCR and IBM’s purchase of software developer Lotus.

Hostile takeovers in the tech industry in recent years have been higher in profile — but not necessarily more fruitful.

Take Xerox and Hewlett-Packard, for example. In November 2019, Xerox — spurred on by activist investor Carl Icahn, who owned a 10.6% stake — approached Hewlett-Packard’s board with an offer to merge the two companies. Hewlett-Packard rejected it, and Xerox responded by announcing plans to replace Hewlett-Packard’s entire board of directors and launching a formal tender offer for Hewlett-Packard’s shares. Pandemic-affected market conditions proved unfavorable for the deal, and Xerox agreed to cease pursuing it in March 2020.

In 2018, tech giant Broadcom unsuccessfully made a hostile bid for semiconductor supplier Qualcomm. After attempting to nominate 11 directors to Qualcomm’s board, Broadcom raised its offer from roughly $100 billion to $121 billion and cut the number of board seats it was trying to win to six. But security concerns raised by U.S. regulators and the possibility of interference from Broadcom’s competition, including Intel, led Broadcom to eventually withdraw.

That isn’t to suggest hostile tech takeovers are a foregone failure. In 2003, Oracle announced a takeover attempt of HR software vendor PeopleSoft in an all-cash deal valued at $5.3 billion. Oracle ultimately succeeded with a higher bid, overcoming 18 months of back-and-forth and a court battle over PeopleSoft’s shareholder provisions.

The downsides of hostile takeovers

The high failure rate isn’t the only factor dissuading hostile takeovers. Other potential pitfalls include tainting the deal-making track record of the hostile bidder and major expenses for the acquirer in the form of adviser and regulatory compliance fees.

Companies have also wised up to hostile takeovers and employ a range of defenses to protect their management’s decision-making power. For example, they can repurchase stock from shareholders or implement a “poison pill,” which considerably dilutes an acquirer’s voting shares in the target company. Or, they can establish a “staggered board,” in which only a certain number of directors are reelected annually.

A note about poison pills, for those curious. As this Biryuk Law blog post helpfully explains, there are three main kinds: a flip-in, a “dead hand,” and a “no hand.” With a flip-in poison pill, shareholders can force a pill redemption by a vote if the hostile offer is all cash for all of the target’s shares. A dead hand pill creates a continuing board of directors, while a no hand pill prohibits the redemption of the pill within a certain period.

Other anti-takeover measures include changing contractual terms to make the target’s agreements with third parties burdensome; saddling the acquirer with debt; and requiring a supermajority shareholder vote for M&A activity. The drawback of these — some of which require shareholder approval — is that they might deter friendly acquisitions. (That’s partially why poison pills, once common in the 1980s and 1990s, fell out of favor in the 2000s.) But many companies consider the risk worthwhile. In March 2020 alone, 57 public companies adopted poison pills in response to an activist threat or as a preventive measure; Yahoo and Netflix are among those that have used poison pills in recent years. (Full disclosure: Yahoo is the parent company of TechCrunch.)

Tech giants commonly employ protectionist share structures as an added defense. Facebook is a prime example — the company has a “dual class” structure designed to maximize the voting power of CEO Mark Zuckerberg and a small group of insiders. Twitter is an anomaly in that it has only one class of shares, but its board retains the right to issue preferred stock, which could come with special voting rights and other privileges. (The Wall Street Journal reported this week that Twitter is weighing adopting a poison pill.)

Some corporate raiders won’t be deterred, though, whether because of strategic considerations or because — as in the case of Elon Musk and Twitter — they believe that the target company’s management isn’t delivering on its promises. They might attempt to recruit other shareholders to their cause to improve their chances of success, or apply public pressure to a company’s board until it reconsiders a bid. They could also invoke the Revlon rule, the legal principle stating that a company’s board must make a reasonable effort to obtain the highest value for the company when a hostile takeover is imminent.

But as history has shown, hostile takeovers — even when successful — are rarely predictable.

Epic Games points to Mac’s openness and security in its latest filing in App Store antitrust case


In a new court filing, Epic Games challenges Apple’s position that third-party app stores would compromise the iPhone’s security. It points to Apple’s macOS as an example of how the process of “sideloading” apps — installing apps from outside Apple’s own App Store, that is — doesn’t have to be the threat Apple describes. The Mac, Epic explains, doesn’t have the same constraints as the iPhone’s operating system, iOS, and yet Apple touts macOS as secure.

The Cary, N.C.-based Fortnite maker made these points in its latest brief, among several others, related to its ongoing legal battle with Apple over its control of the App Store.

Epic Games wants to earn the right to deliver Fortnite to iPhone users outside the App Store, or at the very least, be able to use its own payment processing system so it can stop paying Apple commissions for the ability to deliver its software to iPhone users.

A California judge ruled last September in the Epic Games v. Apple district court case that Apple did not have a monopoly in the relevant market — digital mobile gaming transactions. But the court decided Apple could not prohibit developers from adding links for alternative payments inside their apps that pointed to other ways to pay outside of Apple’s own App Store-based monetization system. While Apple largely touted the ruling as a victory, both sides appealed the decision as Epic Games wanted another shot at winning the right to distribute apps via its own games store, and Apple didn’t want to allow developers to be able to suggest other ways for their users to pay.

On Wednesday, Epic filed its Appeal Reply and Cross-Appeal Response Brief, following Apple’s appeal of the district court’s ruling.

The game maker states in the new filing that the lower court was led astray on many points by Apple, and reached the wrong conclusions. Many of its arguments relate to how the district court interpreted the law. It also newly points to the important allies Epic now has on its side — Microsoft, the Electronic Frontier Foundation, and the attorneys general of 34 states and the District of Columbia, all of whom have filed briefs supporting Epic’s case with the U.S. Court of Appeals for the Ninth Circuit.

However, one of Epic’s larger points has to do with the Mac’s security model and how it differs from the iPhone. Epic says that if Apple can allow sideloading on Mac devices and still call those computers secure, then surely it could do the same for iPhone.

“For macOS, Apple relies on security measures imposed by the operating system rather than the app store, and a ‘notarization’ program that scans apps and then returns them to the developer for distribution,” Epic’s new filing states. It says the lower court even agreed that Apple’s witness on the subject (head of software engineering Craig Federighi) was stretching the truth when he disparaged macOS as having a “malware problem.”

Epic then points to examples of Apple’s own marketing of its Mac computers’ security, where it touts “apps from both the App Store and the internet” can be “installed worry-free.”

Apple has argued against shifting to this same model for iPhone as it would require redesigning how its software works, among other things, including what it says would be reduced security for end users.

As app store legislation targeting tech giants has continued to move forward in Congress, Apple has been raising the alarm about being forced to open up the iPhone to third-party app stores, as the bipartisan Open App Markets Act and other international regulations would require. Apple has said that mandated sideloading is incompatible with its pro-consumer privacy protections.

In a paper Apple published to further detail this issue, it stated that permitting sideloading could risk users’ “most sensitive and private information.”

“Supporting sideloading through direct downloads and third-party app stores would cripple the privacy and security protections that have made iPhone so secure, and expose users to serious security risks,” the paper read. Apple also pointed to Google’s Android operating system as an example of that risk, noting that, over the past four years, Android devices were found to have 15 to 47 times more malware infections than iPhone.

Timed with the release of the new filing, Epic Games CEO Tim Sweeney was interviewed by the Financial Times where he continued to berate Apple for its alleged anti-competitive behavior. Sweeney said that even if Apple fairly won the hardware market, it shouldn’t be allowed to use that position to “gain an unfair advantage over competitors and other markets,” like software.

“They should have to compete fairly against the Epic game store, and the Steam Store, and let’s assume the Microsoft Store, and the many other stores that will emerge — as they do with any other market in the world, except for digital app stores,” Sweeney said.

Epic’s Response and Reply Brief by TechCrunch on Scribd

Roe’s reversal will shake up how startups are built


Hello and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines.

This is our Wednesday show, where we niche down to a single topic, think about a question and unpack the rest. This week, Natasha asked: How does Roe’s reversal impact the ways that companies are built?

The question was inspired by a recent TechCrunch+ column, “Roe reversal weighs heavily on emerging tech cities in red states.” The reporters behind the piece, Dominic-Madori Davis and Becca Szkutak, joined Equity to talk about the story and help us get more of the nuance behind this huge setback.

We chatted about the reappearance of geographic boundaries, selective silence from the money behind the money, and how founders need to rethink their growth strategy if they’re coming from red states. We also chatted about how some founders have already started to react to the overturning of Roe v. Wade and their sentiments about the legality of what happens next.

Equity drops every Monday at 7 a.m. PDT and Wednesday and Friday at 6 a.m. PDT, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

Commercial image-generating AI raises all sorts of thorny legal issues


This week, OpenAI granted users of its image-generating AI system, DALL-E 2, the right to use their generations for commercial projects, like illustrations for children’s books and art for newsletters. The move makes sense, given OpenAI’s own commercial aims — the policy change coincided with the launch of the company’s paid plans for DALL-E 2. But it raises questions about the legal implications of AI like DALL-E 2, trained on public images around the web, and their potential to infringe on existing copyrights.

DALL-E 2 “trained” on approximately 650 million image-text pairs scraped from the internet, learning from that dataset the relationships between images and the words used to describe them. But while OpenAI filtered out images for specific content (e.g. pornography and duplicates) and implemented additional filters at the API level, for example for prominent public figures, the company admits that the system can sometimes create works that include trademarked logos or characters. See:

“OpenAI will evaluate different approaches to handle potential copyright and trademark issues, which may include allowing such generations as part of ‘fair use’ or similar concepts, filtering specific types of content, and working directly with copyright [and] trademark owners on these issues,” the company wrote in an analysis published prior to DALL-E 2’s beta release on Wednesday.

It’s not just a DALL-E 2 problem. As the AI community creates open source implementations of DALL-E 2 and its predecessor, DALL-E, both free and paid services are launching atop models trained on less-carefully filtered datasets. One, Pixelz.ai, which rolled out an image-generating app this week powered by a custom DALL-E model, makes it trivially easy to create photos showing various Pokémon and Disney characters from movies like Guardians of the Galaxy and Frozen.

When contacted for comment, the Pixelz.ai team told TechCrunch that they’ve filtered the model’s training data for profanity, hate speech and “illegal activities” and block users from requesting those types of images at generation time. The company also said that it plans to add a reporting feature that will allow people to submit images that violate the terms of service to a team of human moderators. But where it concerns intellectual property (IP), Pixelz.ai leaves it to users to exercise “responsibility” in using or distributing the images they generate — grey area or no.

“We discourage copyright infringement both in the dataset and our platform’s terms of service,” the team told TechCrunch. “That being said, we provide an open text input and people will always find creative ways to abuse a platform.”

An image of Rocket Raccoon from Disney’s/Marvel’s Guardians of the Galaxy, generated by Pixelz.ai’s system. Image Credits: Pixelz.ai

Bradley J. Hulbert, a founding partner at law firm MBHB and an expert in IP law, believes that image-generating systems are problematic from a copyright perspective in several respects. He noted that artwork that’s “demonstrably derived” from a “protected work” — i.e. a copyrighted character — has generally been found by the courts to be infringing, even if additional elements were added. (Think an image of a Disney princess walking through a gritty New York neighborhood.) To be shielded from copyright claims, the work must be “transformative” — in other words, changed to such a degree that the IP isn’t recognizable.

“If a Disney princess is recognizable in an image generated by DALL-E 2, we can safely assume that The Walt Disney Co. will likely assert that the DALL-E 2 image is a derivative work and an infringement of its copyrights on the Disney princess likeness,” Hulbert told TechCrunch via email. “A substantial transformation is also a factor considered when determining whether a copy constitutes ‘fair use.’ But, again, to the extent a Disney princess is recognizable in a later work, assume that Disney will assert later work is a copyright infringement.”

Of course, the battle between IP holders and alleged infringers is hardly new, and the internet has merely acted as an accelerant. In 2020, Warner Bros. Entertainment, which owns the right to film depictions of the Harry Potter universe, had certain fan art removed from social media platforms including Instagram and Etsy. A year earlier, Disney and Lucasfilm petitioned Giphy to take down GIFs of “Baby Yoda.”

But image-generating AI threatens to vastly scale the problem by lowering the barrier to entry. The plights of large corporations aren’t likely to garner sympathy (nor should they), and their efforts to enforce IP often backfire in the court of public opinion. On the other hand, AI-generated artwork that infringes on, say, an independent artist’s characters could threaten a livelihood.

The other thorny legal issue around systems like DALL-E 2 pertains to the content of their training datasets. Did companies like OpenAI violate IP law by using copyrighted images and artwork to develop their systems? It’s a question that’s already been raised in the context of Copilot, the commercial code-generating tool developed jointly by OpenAI and GitHub. But unlike Copilot, which was trained on code that GitHub might have the right to use for the purpose under its terms of service (according to one legal analysis), systems like DALL-E 2 source images from countless public websites.

As Dave Gershgorn points out in a recent feature for The Verge, there isn’t a direct legal precedent in the U.S. that upholds publicly available training data as fair use.

One potentially relevant case involves a Lithuanian company called Planner 5D. In 2020, the firm sued Meta (then Facebook) for reportedly stealing thousands of files from Planner 5D’s software, which were made available through a partnership with Princeton to contestants of Meta’s 2019 Scene Understanding and Modeling challenge for computer vision researchers. Planner 5D claimed Princeton, Meta and Oculus, Meta’s VR-focused hardware and software division, could have benefited commercially from the training data that was taken from it.

The case isn’t scheduled to go to trial until March 2023. But last April, the U.S. district judge overseeing the case denied motions by then-Facebook and Princeton to dismiss Planner 5D’s allegations.

Unsurprisingly, rightsholders aren’t swayed by the fair use argument. A spokesperson for Getty Images told IEEE Spectrum in an article that there are “big questions” to be answered about “the rights to the imagery and the people, places, and objects within the imagery that [models like DALL-E 2] were trained on.” Association of Illustrators CEO Rachel Hill, who was also quoted in the piece, brought up the issue of compensation for images in training data.

Hulbert believes it’s unlikely a judge will see the copies of copyrighted works in training datasets as fair use — at least in the case of commercial systems like DALL-E 2. He doesn’t think it’s out of the question that IP holders could come after companies like OpenAI at some point and demand that they license the images used to train their systems.

“The copies … constitute infringement of the copyrights of the original authors. And infringers are liable to the copyright owners for damages,” he added. “[If] DALL-E (or DALL-E 2) and its partners make a copy of a protected work, and the copy was neither approved by the copyright owner nor fair use, the copying constitutes copyright infringement.”

Interestingly, the U.K. is exploring legislation that would remove the current requirement that systems trained through text and data mining, like DALL-E 2, be used strictly for non-commercial purposes. While copyright holders could still ask for payment under the proposed regime by putting their works behind a paywall, it would make the U.K.’s policy one of the most liberal in the world.

The U.S. seems unlikely to follow suit, given the lobbying power of IP holders; the issue seems more likely to play out in a future lawsuit instead. But time will tell.


Harvey, which uses AI to answer legal questions, lands cash from OpenAI

Harvey, a startup building what it describes as a “copilot for lawyers,” today emerged from stealth with $5 million in funding led by the OpenAI Startup Fund, the tranche through which OpenAI and its partners are investing in early-stage AI companies tackling major problems. Also participating in the round was Jeff Dean, the lead of Google AI, and Mixer Labs co-founder Elad Gil, among other angel backers.

Harvey was founded by Winston Weinberg, a former securities and antitrust litigator at law firm O’Melveny & Myers, and Gabriel Pereyra, previously a research scientist at DeepMind, Google Brain (another of Google’s AI groups) and Meta AI. Weinberg and Pereyra are roommates — Pereyra showed Weinberg OpenAI’s GPT-3 text-generating system and Weinberg realized that it could be used to improve legal workflows.

“Our product provides lawyers with a natural language interface for their existing legal workflows,” Pereyra told TechCrunch in an email interview. “Instead of manually editing legal documents or performing legal research, Harvey enables lawyers to describe the task they wish to accomplish in simple instructions and receive the generated result. To enable this, Harvey leverages large language models to both understand users’ intent and to generate the correct output.”

More concretely, Harvey can answer questions asked in natural language like, “Tell me what the differences are between an employee and independent contractor in the Fourth Circuit,” and “Tell me if this clause in a lease is in violation of California law, and if so, rewrite it so it is no longer in violation.” On first read, it almost seems as though Harvey could replace lawyers, generating legal arguments and filing drafts at a moment’s notice. But Pereyra insists that this isn’t the case.

“We want Harvey to serve as an intermediary between tech and lawyer, as a natural language interface to the law,” he said. “Harvey will make lawyers more efficient, allowing them to produce higher quality work and spend more time on the high value parts of their job. Harvey provides a unified and intuitive interface for all legal workflows, allowing lawyers to describe tasks in plain English instead of using a suite of complex and specialized tools for niche tasks.”
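A natural-language interface of the sort Pereyra describes typically wraps a large language model call in a task template. The sketch below is a guess at the general shape, with `complete` standing in for a real model API; none of these names come from Harvey itself:

```python
def complete(prompt: str) -> str:
    """Stand-in for a large language model call (e.g. an API request)."""
    return f"[model output for: {prompt[:40]}...]"

# Hypothetical template: real legal tooling would be far more elaborate.
TEMPLATE = (
    "You are assisting a licensed attorney. Task: {task}\n"
    "Jurisdiction: {jurisdiction}\n"
    "Cite the authorities you rely on, and flag any uncertainty.\n"
)

def run_legal_task(task: str, jurisdiction: str) -> str:
    """Turn a plain-English instruction into a prompt and return a draft."""
    prompt = TEMPLATE.format(task=task, jurisdiction=jurisdiction)
    draft = complete(prompt)
    # The result is a draft only: per the tool's own disclaimer, a licensed
    # attorney must review it before anything is relied on.
    return draft
```

The key design point is that the lawyer supplies intent in plain English and the template supplies the scaffolding, so no specialized tool per task is needed.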

It’s powerful stuff in theory. But it’s also fraught. Given the highly sensitive nature of most legal disputes, lawyers and law firms might be reluctant to give a tool like Harvey access to any case documents. There’s also the matter of language models’ proclivity to spout toxicity and made-up facts, which would be particularly poorly received — if not perjurious — in a court of law.

That’s why Harvey, which is currently in beta, has a disclaimer attached to it: The tool isn’t meant to provide legal advice to nonlawyers and should be used under the supervision of licensed attorneys.

On the data privacy issue, Pereyra says that Harvey takes pains to meet clients’ compliance needs, anonymizing user data and deleting data after a predetermined amount of time. Users can delete data at any time on request, he says, and take comfort in the fact that Harvey doesn’t “cross-contaminate” data between clients.

It’s early days. But already Pereyra says that Harvey is being used “by users across the legal landscape,” ranging from law firms to legal aid organizations.

It faces some competition. Casetext uses AI, primarily GPT-3, to find legal cases and assist with general legal research tasks and brief drafting. More surgical tools like Klarity use AI to strip drudgery from contract review. At one point in time, startup Augrented was even exploring ways to leverage GPT-3 to summarize legal notices or other sources in plain English to help tenants defend their rights.

For his part, Brad Lightcap, OpenAI’s COO and the manager of the OpenAI Startup Fund, believes Harvey is sufficiently differentiated. It’ll also benefit from its relationship with OpenAI; OpenAI Startup Fund participants receive early access to new OpenAI systems and Azure resources from Microsoft in addition to capital.

“We believe Harvey will have a transformative impact on our legal system, empowering lawyers to provide higher quality legal services more efficiently to more clients,” Lightcap said via email. “We started the OpenAI Startup Fund to support companies using powerful AI to drive societal level impact, and Harvey’s vision for how AI can increase access to legal services and improve outcomes fits squarely within our mission.”

Harvey has a five-person team, and Pereyra expects that number to grow to five to ten employees by the end of the year. He wouldn’t answer when asked about revenue figures.

Harvey, which uses AI to answer legal questions, lands cash from OpenAI by Kyle Wiggers originally published on TechCrunch

The current legal cases against generative AI are just the beginning

As generative AI enters the mainstream, each new day brings a new lawsuit.

Microsoft, GitHub and OpenAI are currently being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.

Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images.

And just last week, stock image supplier Getty Images took Stability AI to court for reportedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI.

At issue, mainly, is generative AI’s tendency to replicate images, text and more — including copyrighted content — from the data that was used to train it. In a recent example, an AI tool used by CNET to write explanatory articles was found to have plagiarized articles written by humans — articles presumably swept up in its training dataset. Meanwhile, an academic study published in December found that image-generating AI models like DALL-E 2 and Stable Diffusion can and do replicate aspects of images from their training data.

The generative AI space remains healthy — it raised $1.3 billion in venture funding through November 2022, according to PitchBook, up 15% from the year prior. But the legal questions are beginning to affect business.

Some image-hosting platforms have banned AI-generated content for fear of legal blowback. And several legal experts have cautioned that generative AI tools could put companies at risk if they were to unwittingly incorporate copyrighted content generated by the tools into any of the products they sell.

“Unfortunately, I expect a flood of litigation for almost all generative AI products,” Heather Meeker, a legal expert on open source software licensing and a general partner at OSS Capital, told TechCrunch via email. “The copyright law needs to be clarified.”

Content creators such as Polish artist Greg Rutkowski, known for creating fantasy landscapes, have become the face of campaigns protesting the treatment of artists by generative AI startups. Rutkowski has complained about the fact that typing text like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks very similar to his original work — threatening his income.

Given generative AI isn’t going anywhere, what comes next? Which legal cases have merit and what court battles lie on the horizon?

Eliana Torres, an intellectual property attorney with Nixon Peabody, says that the allegations of the class action suit against Stability AI, Midjourney, and DeviantArt will be challenging to prove in court. In particular, she thinks it’ll be difficult to ascertain which images were used to train the AI systems because the art the systems generate won’t necessarily look exactly like any of the training images.

State-of-the-art image-generating systems like Stable Diffusion are what’s known as “diffusion” models. Diffusion models learn to create images from text prompts (e.g. “a sketch of a bird perched on a windowsill”) as they work their way through massive training datasets. The models are trained to “re-create” images as opposed to drawing them from scratch, starting with pure noise and refining the image over time to make it incrementally closer to the text prompt.
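The refinement loop described above can be illustrated with a toy numeric example: start from pure noise and repeatedly nudge the sample toward a target vector that stands in for “what the prompt describes.” Real diffusion models instead use a learned noise predictor over high-dimensional images; everything below is a deliberate simplification:

```python
import random

def denoise_step(x, target, strength=0.2):
    """One toy 'denoising' update: nudge the sample toward the target."""
    return [xi + strength * (ti - xi) for xi, ti in zip(x, target)]

def generate(target, steps=50, seed=0):
    """Start from pure noise and refine toward the target over many steps."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in target]  # pure noise
    for _ in range(steps):
        x = denoise_step(x, target)
    return x

target = [0.5, -1.0, 2.0]  # stand-in for the prompt's target
sample = generate(target)
# After enough steps the sample sits very close to the target.
print(max(abs(s - t) for s, t in zip(sample, target)) < 0.01)
```

This is why, as the article notes, outputs resemble the training distribution without being pixel-for-pixel copies: the model converges toward what the data taught it, not toward any single stored image.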

Perfect recreations don’t occur often, to Torres’ point. As for images in the style of a particular artist, style has proven nearly impossible to shield with copyright.

“It will … be challenging to get a general acceptance of the definition of ‘in style of’ as ‘a work that others would accept as a work created by that artist whose style was called upon,’ which is mentioned in the complaint [i.e. against Stability AI et al.],” Torres told TechCrunch in an email interview. 

Torres also believes the suit should be directed not at the creators of these AI systems, but at the party responsible for compiling the images used to train them: Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization. Midjourney, DeviantArt and Stability AI use training data from LAION’s datasets, which span billions of images from around the web.

“If LAION created the dataset, then the alleged infringement occurred at that point, not once the dataset was used to train the models,” Torres said. “It’s the same way a human can walk into a gallery and look at paintings but is not allowed to take photos.”

Companies like Stability AI and OpenAI, the company behind ChatGPT, have long claimed that “fair use” protects them in the event that their systems were trained on licensed content. This doctrine enshrined in U.S. law permits limited use of copyrighted material without first having to obtain permission from the rightsholder.

Supporters point to cases like Authors Guild v. Google, in which the New York-based U.S. Court of Appeals for the Second Circuit ruled that Google manually scanning millions of copyrighted books without a license to create its book search project was fair use. What constitutes fair use is constantly being challenged and revised, but in the generative AI realm, it’s an especially untested theory.

A recent article in Bloomberg Law asserts that the success of a fair use defense will depend on whether the works generated by the AI are considered transformative — in other words, whether they use the copyrighted works in a way that significantly varies from the originals. Previous case law, particularly the Supreme Court’s 2021 Google v. Oracle decision, suggests that using collected data to create new works can be transformative. In that case, Google’s use of portions of Java SE code to create its Android operating system was found to be fair use.

Interestingly, other countries have signaled a move toward more permissive use of publicly available content — copyrighted or not. For example, the U.K. is planning to tweak an existing law to allow text and data mining “for any purpose,” moving the balance of power away from rightsholders and heavily toward businesses and other commercial entities. There’s been no appetite to embrace such a shift in the U.S., however, and Torres doesn’t expect that to change anytime soon — if ever.

The Getty case is slightly more nuanced. Getty — which Torres notes hasn’t yet filed a formal complaint — must show damages and connect any infringement it alleges to specific images. But Getty’s statement mentions that it has no interest in financial damages and is merely looking for a “new legal status quo.” 

Andrew Burt, one of the founders of AI-focused law firm BNH.ai, disagrees with Torres to the extent that he believes generative AI lawsuits focused on intellectual property issues will be “relatively straightforward.” In his view, if data was used improperly to train AI systems, whether in violation of intellectual property or privacy restrictions, those systems should and will be subject to fines or other penalties.

Burt noted that the Federal Trade Commission (FTC) is already pursuing this path with what it calls “algorithmic disgorgement,” where it forces tech firms to kill problematic algorithms along with any ill-gotten data that they used to train them. In a recent example, the FTC used the remedy of algorithmic disgorgement to force Everalbum, the maker of a now-defunct mobile app called Ever, to delete facial recognition algorithms the company developed using content uploaded by people who used its app. (Everalbum didn’t make it clear that the users’ data was being used for this purpose.)

“I would expect generative AI systems to be no different from traditional AI systems in this way,” Burt said.

What are companies to do, then, in the absence of precedent and guidance? Torres and Burt concur that there’s no obvious answer.

For her part, Torres recommends looking closely at the terms of use for each commercial generative AI system. She notes that Midjourney has different rights for paid versus unpaid users, while OpenAI’s DALL-E assigns rights around generated art to users while also warning them of “similar content” and encouraging due diligence to avoid infringement.

“Businesses should be aware of the terms of use and do their due diligence, such as using reverse image searches of the generated work intended to be used commercially,” she added.
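The due diligence Torres recommends can be partly automated. A common building block is perceptual hashing, sketched below as a toy average hash over tiny grayscale grids; a production check would use a real reverse-image-search service or a library such as `imagehash`, and the bit threshold here is arbitrary:

```python
def average_hash(pixels):
    """Average hash of a grayscale grid: 1 bit per pixel vs. the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_similar(img_a, img_b, threshold=2):
    """Flag pairs whose hashes differ in at most `threshold` bits."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold

known_work = [[10, 200], [30, 220]]
generated  = [[12, 198], [29, 223]]   # near-identical pixel values
unrelated  = [[200, 10], [220, 30]]   # inverted layout
print(looks_similar(known_work, generated))   # True
print(looks_similar(known_work, unrelated))   # False
```

Perceptual hashes survive small edits (compression, slight recoloring) that defeat exact-byte comparison, which is what makes them useful for this kind of screening.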

Burt recommends that companies adopt risk management frameworks such as the AI Risk Management Framework released by the National Institute of Standards and Technology (NIST), which gives guidance on how to address and mitigate risks in the design and use of AI systems. He also suggests that companies continuously test and monitor their systems for potential legal liabilities.

“While generative AI systems make AI risk management harder — it is, to be fair, much more straightforward to monitor an AI system that makes binary predictions for risks — there are concrete actions that can be taken,” Burt said.

Some firms, under pressure from activists and content creators, have taken steps in the right direction. Stability AI plans to allow artists to opt out of the dataset used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time. Rival OpenAI offers no such opt-out mechanism, but the firm has partnered with organizations like Shutterstock to license portions of their image galleries.

For Copilot, GitHub introduced a filter that checks code suggestions with their surrounding code of about 150 characters against public GitHub code and hides suggestions if there’s a match or “near match.” It’s an imperfect measure — enabling the filter can cause Copilot to omit key pieces of attribution and license text — but GitHub has said that it plans to introduce additional features in 2023 aimed at helping developers make informed decisions about whether to use Copilot’s suggestions.
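A suggestion-versus-corpus check of the kind GitHub describes can be sketched as a normalized substring match over the suggestion plus its surrounding context. This is a guess at the general shape of such a filter, not GitHub’s actual algorithm; the 150-character window simply mirrors the figure above:

```python
def normalize(code: str) -> str:
    """Collapse whitespace so formatting differences don't mask a match."""
    return " ".join(code.split())

def near_match(suggestion: str, context: str, corpus: list,
               window: int = 150) -> bool:
    """Check the suggestion plus trailing context against public code."""
    probe = normalize(context[-window:] + " " + suggestion)
    return any(probe in normalize(doc) for doc in corpus)

public_code = ["def add(a, b):\n    return a + b\n"]
print(near_match("return a + b", "def add(a, b):", public_code))  # True
print(near_match("return a * b", "def add(a, b):", public_code))  # False
```

A filter like this illustrates the attribution problem the article mentions: suppressing a matching suggestion also suppresses the license text that would have identified its origin.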

Taking the ten-thousand-foot view, Burt believes that generative AI is being deployed more and more without an understanding of how to address its dangers. He praises efforts to combat the obvious problems, like copyrighted works being used to train content generators. But he cautions that the opacity of the systems will put pressure on businesses to prevent the systems from wreaking havoc — and to have a plan for addressing the systems’ risks before they’re put into the wild.

“Generative AI models are among the most exciting and novel uses of AI — with the clear potential to transform the ‘knowledge economy’,” he said. “Just as with AI in many other areas, the technology is largely there and ready for use. What isn’t yet mature are the ways to manage all of its risks. Without thoughtful, mature evaluation and management of these systems’ harms, we risk deploying a technology before we understand how to stop it from causing damage.”

Meeker is more pessimistic, arguing that not all businesses — regardless of the mitigations they undertake — will be able to shoulder the legal costs associated with generative AI. This points to the urgent need for clarification or changes in copyright law, she says.

“If AI developers don’t know what data they can use to train models, the technology could be set back by years,” Meeker said. “In a sense, there is nothing they can do, because if businesses are unable to lawfully train models on freely available materials, they won’t have enough data to train the models. There are only various long-term solutions like opt-in or opt-out models, or systems that aggregate royalties for payment to all authors … The suits against AI businesses for ingesting copyrightable material to train models are potentially crippling to the industry, [and] could cause consolidation that would limit innovation.”

The current legal cases against generative AI are just the beginning by Kyle Wiggers originally published on TechCrunch

3 tips for crypto startups preparing for continued compliance

Between the decline in cryptocurrency prices and the bankruptcy of several large players in the industry, today’s cryptocurrency companies face no shortage of challenges. However, cryptocurrency companies should not lose sight of their day-to-day obligations, particularly those concerning compliance.

In fact, both state and federal regulators continue to bring enforcement actions against cryptocurrency companies over alleged compliance deficiencies, resulting in substantial monetary penalties and, in extreme cases, even arrest of the companies’ founders.

The risk posed by inadequate compliance shows no signs of abating. Early-stage cryptocurrency companies can lay a foundation for future success by continually assessing their compliance obligations through a risk-based approach and quickly addressing any deficiencies, particularly during periods of rapid expansion, as well as by vigilantly monitoring for new regulatory developments.

It is no secret that cryptocurrency regulation remains complicated, with several government regulators adopting differing and sometimes competing approaches.

1. Assess your business’s compliance risk and build a well-resourced compliance function

Cryptocurrency companies of all shapes and sizes would benefit from undertaking a dispassionate assessment of the compliance risks facing the company. The Financial Action Task Force (FATF), an independent, intergovernmental body that publishes global anti-money laundering compliance standards for both companies and governments, recommends that financial institutions, including cryptocurrency companies, adopt a risk-based approach to compliance.

This approach involves considering a company’s products, services, business model, customers, geography and other factors in order to assess, and then address, the greatest risks to the company. As a company evolves and grows over time, these risks should be continually reevaluated to ensure the company stays ahead of any developing compliance risks.

Cryptocurrency companies are often regulated by an alphabet soup of government entities. Some of the most common and well-known regulations include, for example:

  • Registration and licensure requirements. Cryptocurrency companies are frequently required to register with various government regulators in order to operate, although companies may not always immediately recognize the requirement. For example, many cryptocurrency exchanges or ATMs are required to register as money services businesses with the U.S. Department of the Treasury’s Financial Crimes Enforcement Network. Similarly, the New York State Department of Financial Services (NYSDFS) requires cryptocurrency companies to obtain a “BitLicense” if they conduct business in New York or with New York residents, which will likely include many companies that are not physically based in New York.
  • Anti-money laundering and know your customer regulations. Many cryptocurrency companies must comply with Know Your Customer (KYC) regulations, which require these companies to collect substantial information regarding their customers during the onboarding process. Anti-money laundering (AML) laws also require that companies monitor transactions and report potentially suspicious activity. Together, these laws are designed to combat criminal activity and terrorist financing, as well as prevent transactions with sanctioned entities and individuals. Although these laws are widely known, in practice compliance can prove difficult, and cryptocurrency companies continue to be cited for alleged AML/KYC compliance failures.
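The transaction-monitoring side of AML compliance often begins with simple rules, such as flagging large transfers and repeated just-below-threshold activity (possible “structuring”). The thresholds and rules below are purely illustrative, not legal guidance or any regulator’s actual criteria:

```python
from collections import defaultdict

REPORT_THRESHOLD = 10_000   # illustrative large-transaction reporting line
STRUCTURING_COUNT = 3       # several just-below-threshold transfers

def flag_suspicious(transactions):
    """Return customer IDs whose activity matches two simple AML rules."""
    flagged = set()
    near_threshold = defaultdict(int)
    for customer_id, amount in transactions:
        if amount >= REPORT_THRESHOLD:
            flagged.add(customer_id)           # rule 1: large transfer
        elif amount >= REPORT_THRESHOLD * 0.9:
            near_threshold[customer_id] += 1   # rule 2: possible structuring
            if near_threshold[customer_id] >= STRUCTURING_COUNT:
                flagged.add(customer_id)
    return flagged

txns = [("alice", 12_000), ("bob", 9_500), ("bob", 9_400), ("bob", 9_600)]
print(sorted(flag_suspicious(txns)))  # ['alice', 'bob']
```

Real monitoring programs layer many more signals (geography, counterparties, velocity) on top of rules like these, which is part of why the article notes compliance proves difficult in practice.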

    3 tips for crypto startups preparing for continued compliance by Jenna Routenberg originally published on TechCrunch

Ask Sophie: How do we transfer H-1Bs and green cards to our startup?

Here’s another edition of “Ask Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

TechCrunch+ members receive access to weekly “Ask Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie,

I was recently laid off. I’m co-founding a cleantech startup with two of my former colleagues, who were also laid off. Both of my co-founders are on H-1Bs and had green cards in the works with our former company. I’m a U.S. citizen.

What do we need to do to transfer their H-1Bs and green cards to our startup? Based on your experience, do investors care about the amount of money a startup spends on visas and green cards for their founders?

— First-time Founder

Dear First-time,

Congrats to you and your co-founders on dreaming big and taking the leap to create your own startup! I appreciate your dedication to the environment, your tenacity, and your spirit of innovation.

Let me take your second question first. In my experience, the majority of U.S. investors who back my international founder clients tend to be interested in whether a startup has an innovative idea with some initial traction and a strong founding team, and whether it is structured as a Delaware C corporation. Many investors I’ve worked with have been very supportive of immigration efforts that keep founding teams and key talent together in the United States to build and scale their startups, even if that means paying founders higher wages than is typical in the startup market to ensure compliance with various immigration requirements.

A composite image of immigration law attorney Sophie Alcorn in front of a background with a TechCrunch logo.

Image Credits: Joanna Buniak / Sophie Alcorn (opens in a new window)

That said, you can broaden your funding sources by considering grants, particularly since your focus is cleantech. The big benefit of grants is that they are non-dilutive capital. And they don’t require repayment like a loan. You have a contract with deliverables that you as startup founders define.

What’s more, grants and other funding can help your co-founders qualify for an EB-1A extraordinary ability green card, which I’ll discuss in more detail in a bit. These funds can also be used to pay your co-founders’ legal and filing fees for their H-1Bs as well as their H-1B salaries.

Now let me dive into your initial question, starting with H-1B transfers.

H-1B Transfers

As you and your co-founders know, they have a 60-day grace period from their last day of employment in their former H-1B roles until they have to leave the U.S. or apply for another status. Transferring your co-founders’ H-1Bs to your startup is definitely possible, but you’ll want to start immediately: you’ll need to take the steps necessary to qualify your startup to sponsor H-1Bs before proceeding with the transfer, and that 60-day grace period is already counting down.

Ask Sophie: How do we transfer H-1Bs and green cards to our startup? by Jenna Routenberg originally published on TechCrunch

Apple wins antitrust court battle with Epic Games, appeals court rules

Apple has won its antitrust-focused appeals court battle with Fortnite maker Epic Games over its App Store policies, according to the opinion issued today by the U.S. Ninth Circuit Court of Appeals. The court largely upheld the district court’s earlier ruling related to Epic Games’ antitrust claims in favor of Apple, but it also upheld the lower court’s judgment in favor of Epic under California’s Unfair Competition Law.

The mobile game maker had hoped to prove in its appeal that Apple had acted unlawfully by restricting app distribution on iOS devices to Apple’s App Store, which required payments to go through Apple’s own processor, while preventing developers from communicating to customers about alternative ways to pay.

The court’s ruling was first reported by Bloomberg.

Apple has issued the following statement:

Today’s decision reaffirms Apple’s resounding victory in this case, with nine of ten claims having been decided in Apple’s favor. For the second time in two years, a federal court has ruled that Apple abides by antitrust laws at the state and federal levels. The App Store continues to promote competition, drive innovation, and expand opportunity, and we’re proud of its profound contributions to both users and developers around the world. We respectfully disagree with the court’s ruling on the one remaining claim under state law and are considering further review.

The ruling is a major setback for Epic Games and other developers who hoped the ruling could set precedent for further antitrust claims and require Apple to open iOS devices to third-party app stores and payment systems.

Epic originally sued Apple in 2020, after it intentionally violated the App Store’s in-app purchase terms and thereby forced Apple to remove Fortnite from the App Store. Though Apple largely won the lawsuit when the judge declared that Apple was not acting as a monopolist, the court sided with the Fortnite maker on the matter of Apple’s anti-steering policies regarding restrictions on in-app purchases. It said that Apple would no longer be able to prohibit developers from pointing users to other means of payment.

Both Apple and Epic appealed the ruling — Apple over the required changes to App Store policies related to external links and Epic to try its antitrust case again.

In today’s decision, the appeals court panel affirmed the district court’s denial of antitrust liability and its corresponding rejection of Epic’s illegality defense to Apple’s breach-of-contract counterclaim, the ruling said. However, it also noted that the district court had erred in defining the relevant antitrust market and in holding that Apple’s DPLA (Developer Program Licensing Agreement) fell outside the scope of the antitrust law known as the Sherman Act.

But it said those errors were ultimately “harmless” and that Epic, regardless, had “failed to establish, as a factual matter, its proposed market definition and the existence of any substantially less restrictive alternative means for Apple to accomplish the procompetitive justifications supporting iOS’s walled-garden ecosystem.”

In other words, while these types of contracts can be within the scope of a Sherman Act claim, that wasn’t relevant to the court’s decision in this case.

The panel also upheld the district court’s ruling in favor of Epic Games within the scope of California’s Unfair Competition Law.

“The district court did not clearly err in finding that Epic was injured, err as a matter of law when applying California’s flexible liability standards, or abuse its discretion when fashioning equitable relief,” the ruling stated.

That would mean the anti-steering changes the district court previously decided on would once again be required.

Apple hasn’t yet filed an appeal of this part of the decision. It will likely weigh its options before making that determination.

In another bright spot for Apple, the appeals court found that the district court had erred in ruling that Apple wasn’t entitled to attorney fees related to the DPLA breach-of-contract claims.

Epic Games responded to a request for comment by pointing to founder and CEO Tim Sweeney’s statement, shared on Twitter.

“Apple prevailed at the 9th Circuit Court,” Sweeney wrote. “Though the court upheld the ruling that Apple’s restraints have ‘a substantial anticompetitive effect that harms consumers,’ they found we didn’t prove our Sherman Act case. Fortunately, the court’s positive decision rejecting Apple’s anti-steering provisions frees iOS developers to send consumers to the web to do business with them directly there. We’re working on next steps.”

Updated 4/23/23, 4:35 p.m. ET with Epic Games’ comment

Apple wins antitrust court battle with Epic Games, appeals court rules by Sarah Perez originally published on TechCrunch

No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked


Few lawyers would be foolish enough to let an AI make their arguments, but one already did, and Judge Brantley Starr is taking steps to ensure that debacle isn’t repeated in his courtroom.

The Texas federal judge has added a requirement that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”

Last week, attorney Steven Schwartz allowed ChatGPT to “supplement” his legal research in a recent federal filing, providing him with six cases and relevant precedent — all of which were completely hallucinated by the language model. He now “greatly regrets” doing this, and while the national coverage of this gaffe probably caused any other lawyers thinking of trying it to think again, Judge Starr isn’t taking any chances.

At the federal site for Texas’ Northern District, Starr has, like other judges, the opportunity to set specific rules for his courtroom. And added recently (though it’s unclear whether this was in response to the aforementioned filing) is the “Mandatory Certification Regarding Generative Artificial Intelligence.” Eugene Volokh first reported the news.

All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

A form for lawyers to sign is appended, noting that “quotations, citations, paraphrased assertions, and legal analysis” are all covered by this proscription. As summary is one of AI’s strong suits, and finding and summarizing precedent or previous cases is something that has been advertised as potentially helpful in legal work, this may end up coming into play more often than expected.

Whoever drafted the memorandum on this matter at Judge Starr’s office has their finger on the pulse. The certification requirement includes a pretty well informed and convincing explanation of its necessity (line breaks added for readability):

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why.

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.

As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.

In other words, be prepared to justify yourself.

While this is just one judge in one court, it would not be surprising if others took up this rule as their own. While as the court says, this is a powerful and potentially helpful technology, its use must be at the very least clearly declared and checked for accuracy.

IP for startups: It starts with strategy


Intellectual property can be a powerful weapon in your startup’s arsenal. It can protect you from competitors using your tech, and it can drastically improve how valuable your company is: If your IP is stopping a big company from doing what it wants, that could, in itself, be a good enough reason for acquiring you.

In this new series, we are talking to Michele Moreland, who is a general partner at Aventurine, which is taking an IP-first approach to investing. Michele has been at the cutting edge of IP strategy throughout her career and has been responsible for $3 billion in patent verdicts as a portfolio strategist. As a trial lawyer, Michele represented some of the most important tech companies of our time, including Qualcomm, Amgen and Nvidia.

So, what’s IP? Well, it refers to “creations of the mind,” such as inventions, literary and artistic works, designs, symbols, names, and images used in business. Some IP is automatic (e.g., this article is automatically covered by copyright because I wrote it), and other IP — such as trademarks and patents — need to be protected more actively.

In this series, we take a deep dive into the various types of IP — including patents, copyrights, trademarks and trade secrets — seen through the lens of early-stage startups. As a startup founder, what do you need to think about — where and when, and how much will it cost — when protecting the IP your company is creating?

Start with the “why”

“Oftentimes, people think they just need to get the patent, because it checks a box for VCs,” Moreland said. “But if you really want IP to be a scaffold for the business and potentially create value, and maybe offer support in the context of a future exit, you need to take a broader view.”

The considerations are manifold, but it starts with thinking about where your company is in the market and the space you take up vis-à-vis your competitors. This includes thinking about your company’s geographic location and that of your customers and potential acquirers. You need to think about the types of IP that may support your business and the people who need to be involved in the strategy and execution of your intellectual property approach.

“I think there are certain people in the company that get left out of the mix. That may be a mistake. From my litigation experience, I’ve seen that outcomes may have been different if certain marketing people had been part of early conversation about IP,” Moreland said. “Starting from the 100,000-foot view, the conversation starts with ‘Where are we?’ and ‘Where do we want to go?’”


Backed by Gradient, Fileread uses large language models to make legal discovery more efficient


Legal discovery is one of the most time-consuming parts of litigation, typically involving teams of specialists combing through towers of documents. Fileread, a startup that uses large language models (LLMs) to build tools for faster and more efficient discovery, announced today it has raised $6 million in seed funding.

The round was led by Gradient Ventures, Google’s AI-focused fund, with participation from Soma Capital.

Fileread’s tools are meant to increase the chances of crucial information being found during the discovery process, and to find it faster. Co-founder Chan Koh told TechCrunch that while he was studying engineering at Caltech, his parents lost his childhood home during the housing crisis of 2008 and did not understand the law well enough to find relief.

“Witnessing my parents grapple with the shock of losing something they’d worked so hard to attain was incredibly painful,” he said. “After graduating, I was motivated to build something that could have aided my parents and others in similar situations.”

Fileread was founded in 2020, shortly after its team, led by Koh and co-founder and co-CEO Daniel Hu, began collaborating with Stanford University’s Deliberative Democracy Lab to analyze its deliberations. Freya Zhou joined then as COO and co-founder, and Fileread built its first LLM platform. The work made the founders realize how powerful LLMs are at finding the right passages in enormous amounts of text, and that legal discovery posed problems similar to deliberations, only at a much larger scale.

Fileread founders Chan Koh, Freya Zhou and Daniel Hu

For example, Fileread is currently being used on a case with more than a million documents and a team of only 40 to 50 specialist reviewers. Fileread helps them save time by answering otherwise time-consuming queries. Users can ask Fileread anything related to the content of the documents uploaded to its platform; for example, if they ask “who was involved in the transactions,” Fileread returns a list of all possible answers highlighted in the original documents.

Legal teams can safeguard against wrong answers because Fileread provides citations to each answer from its LLMs, which direct users to the original sources of truth that generated the LLM response in the first place.
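Fileread has not published its implementation, but the citation-grounding pattern described above (every answer returned with a pointer back to its source document so a human can verify it) can be sketched in a few lines. The names, the naive keyword-overlap scoring and the file IDs below are hypothetical illustrations, not Fileread’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # which uploaded document the text came from
    page: int    # location within that document
    text: str

def score(query: str, passage: Passage) -> int:
    # Naive relevance: count how many query terms appear in the passage.
    terms = set(query.lower().split())
    words = set(passage.text.lower().split())
    return len(terms & words)

def answer_with_citations(query: str, corpus: list[Passage], top_k: int = 3):
    # Rank passages and return each candidate answer together with a
    # citation, so a reviewer can jump back to the original source.
    ranked = sorted(corpus, key=lambda p: score(query, p), reverse=True)
    return [
        {"answer": p.text, "citation": f"{p.doc_id}, p.{p.page}"}
        for p in ranked[:top_k]
        if score(query, p) > 0
    ]

corpus = [
    Passage("email_0412.pdf", 2, "Alice approved the transaction on March 3."),
    Passage("memo_0098.pdf", 1, "Quarterly budget review notes."),
]
results = answer_with_citations("who approved the transaction", corpus)
# Each result carries a citation like "email_0412.pdf, p.2"
```

A production system would swap the keyword overlap for embedding search plus an LLM answering step, but the safeguard is the same: no answer leaves the system without a citation a human reviewer can check.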

Other startups in the legal space include Casetext and Harvey. Koh said Fileread differentiates itself from Casetext because Casetext’s primary focus is case research rather than discovery, while Harvey is focused on serving the broader legal services market.

Fileread’s new funding will be used on hiring, scaling its product and finding new ways to use LLMs for legal applications.

ConfiAbogado wants to put a tech touch on Latin America’s legal system


ConfiAbogado, a Mexico City-based startup, raised $1.65 million in seed funding to provide better accessibility to legal services across Latin America.

Co-founder and CEO Emiliano Ruiz launched the company with his brother, Julián Ruiz, in 2020. Emiliano previously worked at a traditional law firm, while Julián’s background is as an analyst and processes coordinator.

It was while at the law firm that Emiliano Ruiz noticed that many of the firm’s clients were big corporations and high-net-worth individuals because of the cost involved, noting to TechCrunch via email that the expense of legal services is why only one in 11 individuals in Latin America seeks legal counsel when they need it.

“I wanted to find a way to make legal help more efficient and accessible to reach more clients,” Ruiz said. “We started to reach out to law offices about acquiring our technology, but they were reluctant to change their old ways, so we focus on providing value, not to the lawyers, but to the people with legal needs in a novel manner combining our background of law and automation.”

ConfiAbogado focuses on providing legal solutions at volume rather than at high cost. Here’s how it works: Clients enter information about their case into the ConfiAbogado system, and the company’s proprietary technology taps into a network of validated lawyers, and over 100,000 legal possibilities, to produce the best strategy and automatically create the necessary legal documents, all in about 20 minutes. Clients are then kept informed on the status of their cases.

The company provides legal services in the areas of civil, commercial, labor and family litigation. Since its launch two years ago, it has been adding more than 250 new clients each month, and in the past year revenue grew over 10x.

“ConfiAbogado offers to finance the litigation cost to its customers until the case is won and then split the wins to make legal aid accessible to everyone,” Ruiz said. “This solves the problem for regular people of not having enough money to hire lawyers as well as forces us to be more efficient and look for the best way to get a good result for a client since both interests are aiming to the same place.”

Meanwhile, Tuesday Capital led ConfiAbogado’s seed round and was joined by a group of investors that includes DTB Capital, Seedstars International Ventures, 500 Global, Invariantes, Goodwater, GAIN Capital and Side Door Ventures.

The new funding will support ConfiAbogado’s expansion to new verticals and cities, with a plan to more than triple its current geographic footprint, and to develop new AI tools. In addition, just 8% of people with a legal problem in Latin America hire an attorney, and one of Ruiz’s goals is to double that in the next seven years.

“We are focused on creating the best option for our customers when it comes to solving a legal problem,” Ruiz said. “ConfiAbogado will not only give legal solutions but allow persons to lay back and relax while the whole process happens, we call this extraordinary legal care for ordinary people.”

Darrow raises $35M for an AI that parses public documents for class action lawsuit potential

The U.S. is famous (or infamous) for its litigiousness: The country may not have the highest per capita amount of lawsuits (that’s Germany), but it has the most of any country overall amid a very active legal industry whose caseload is growing in a market that is worth many tens of billions of dollars. Now, […]

Eve launches to bring LLMs to the legal profession


In 2020, Jay Madheswaran, Matt Noe and David Zeng, all veterans of the tech industry, had a vision to harness the power of large language models (à la OpenAI’s ChatGPT) to shake up the legal profession. Their goal was to create a platform that’d enable lawyers to be more productive by abstracting away processes around […]

© 2023 TechCrunch. All rights reserved. For personal use only.

Apple excludes video and news partners from new App Store rules around external payments


Apple this week updated its App Store rules to comply with a court order after the Supreme Court declined to hear the Epic Games-initiated antitrust case against Apple over commissions. As a result, developers can now promote alternative means to pay for their in-app purchases and subscriptions via links or buttons inside their iOS apps. […]





