The Department of Justice is stepping up its focus on artificial intelligence (“AI”), with officials warning that harsher penalties could be in store for those who deliberately misuse the technology to commit white-collar offenses, such as price fixing and market manipulation.
On February 26, 2024, the United States Supreme Court is set to hear oral argument in two cases currently before the Court, Moody v. NetChoice and NetChoice v. Paxton. At their core, these cases raise the question whether the First Amendment prohibits state laws that restrict “social media platforms” from engaging in content moderation and from making editorial choices about whether and how to publish speech on their platforms. In addition to resolving a split between the Fifth and Eleventh Circuits on the issue, the Supreme Court’s decision could affect content moderation and regulation beyond social media sites, including on generative artificial intelligence (“AI”) platforms.
Recent headlines have been dominated by rapid developments in generative artificial intelligence, and a number of startups are positioning themselves to offer the legal industry new tools built on this groundbreaking technology.
The legal world recently learned an important lesson about the blind adoption of generative AI when two New York attorneys were sanctioned for using ChatGPT to write a brief that included entirely fabricated cases.
The U.S. Government has long made clear its desire to restrict certain outbound U.S. investments, but it was unclear whether such a restriction would come through executive or legislative action.