Posts by Slashdot (old posts, page 64)

PhD Graduates Far Exceed Academic Job Openings

The number of doctoral graduates globally has grown steadily over recent decades, creating a massive imbalance between PhD holders and available academic positions. Among the 38 OECD countries, the number of new doctorate holders almost doubled between 1998 and 2017. China's doctoral enrollment has exploded from around 300,000 students in 2013 to more than 600,000 in 2023. This growth has forced PhD graduates into non-academic careers at unprecedented rates. A 2023 study of more than 4,500 PhD graduates in the United Kingdom found that over two-thirds were employed outside academia. In South Africa, 18% of more than 6,000 PhD graduates reported difficulty finding jobs related to their expertise. Some countries have begun adapting their doctoral programs. Japan, Germany and the United Kingdom now offer training and paid internships during doctoral studies, including "industrial PhD" programs where students conduct research in collaboration with companies.

Read more of this story at Slashdot.

Canadian Telecom Hacked By Suspected China State Group

Hackers suspected of working on behalf of the Chinese government exploited a maximum-severity vulnerability, which had received a patch 16 months earlier, to compromise a telecommunications provider in Canada, officials from that country and the US said Monday. ArsTechnica: "The Cyber Centre is aware of malicious cyber activities currently targeting Canadian telecommunications companies," officials for the center, the Canadian government's primary cybersecurity agency, said in a statement. "The responsible actors are almost certainly PRC state-sponsored actors, specifically Salt Typhoon." The FBI issued its own nearly identical statement. Salt Typhoon is the name researchers and government officials use to track one of several discrete groups known to hack nations all over the world on behalf of the People's Republic of China. In October 2023, researchers disclosed that hackers had backdoored more than 10,000 Cisco devices by exploiting CVE-2023-20198, a vulnerability with a maximum severity rating of 10. Any switch, router, or wireless LAN controller running Cisco's IOS XE that had the HTTP or HTTPS server feature enabled and exposed to the Internet was vulnerable. Cisco released a security patch about a week after security firm VulnCheck published its report.
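Because the exposure precondition is simply an IOS XE web UI reachable from the Internet, a coarse external check can tell an operator whether a device is even a candidate for this attack. The sketch below is a minimal, hypothetical illustration of that check, not Cisco's guidance: the addresses and port list are placeholders, and a responding web server only indicates exposure, not that the device runs IOS XE or is vulnerable. Cisco's advisory for CVE-2023-20198 recommends disabling or restricting the HTTP server feature on devices that do not need it.

```python
# Minimal sketch: check whether a device's web UI answers on HTTP/HTTPS.
# The addresses and ports are illustrative placeholders; a responding service
# only indicates exposure, not vulnerability.
import socket

DEVICES = ["192.0.2.10", "198.51.100.7"]   # hypothetical management addresses
PORTS = [80, 443]

def port_answers(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in DEVICES:
    exposed = [p for p in PORTS if port_answers(host, p)]
    if exposed:
        print(f"{host}: web UI ports answering on {exposed} -- "
              "review whether the HTTP/HTTPS server feature should be disabled")
    else:
        print(f"{host}: no HTTP/HTTPS service reachable")
```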

Read more of this story at Slashdot.

Scientists Use Bacteria To Turn Plastic Waste Into Paracetamol

Bacteria can be used to turn plastic waste into painkillers, researchers have found, opening up the possibility of a more sustainable process for producing the drugs. From a report: Chemists have discovered that E. coli can be used to create paracetamol, also known as acetaminophen, from a material produced in the laboratory from plastic bottles. "People don't realise that paracetamol comes from oil currently," said Prof Stephen Wallace, the lead author of the research from the University of Edinburgh. "What this technology shows is that by merging chemistry and biology in this way for the first time, we can make paracetamol more sustainably and clean up plastic waste from the environment at the same time." Writing in the journal Nature Chemistry, Wallace and colleagues report how they discovered that a type of chemical reaction called a Lossen rearrangement, a process that has never been seen in nature, was biocompatible. In other words, it could be carried out in the presence of living cells without harming them. The team made their discovery when they took polyethylene terephthalate (PET) -- a type of plastic often found in food packaging and bottles -- and, using sustainable chemical methods, converted it into a new material.

Read more of this story at Slashdot.

Caps of Glass Bottles Contaminate Beverages With Microplastics

Microplastics are present in all beverages, but those packaged in glass bottles contain more microplastic particles than those in plastic bottles, cartons or cans. This was the surprising finding of a study conducted by the Boulogne-sur-Mer unit of the ANSES Laboratory for Food Safety. The scientists hypothesised that these plastic particles could come from the paint used on bottle caps. Water and wine are less affected than other beverages. The findings highlight a source of microplastic contamination in drinks that manufacturers could easily take measures to address.

Read more of this story at Slashdot.

Altman Says Meta Targeting OpenAI Staff With $100 Million Bonuses as AI Race Intensifies

OpenAI CEO Sam Altman accused Meta of attempting to poach his developers with $100 million sign-on bonuses and higher compensation packages as the social media giant races to catch up in AI. Altman said Meta, which has a $1.8 trillion market capitalization, began making the offers to his team members after falling behind in its AI efforts. "I've heard that Meta thinks of us as their biggest competitor," Altman said on the Uncapped podcast [video] hosted by his brother. None of his "best people" had accepted Zuckerberg's offers, he said. Meta has been recruiting top researchers and engineers from rival companies to build a new "superintelligence" team focused on developing AGI. The Facebook parent company has struggled this year to match competitors, facing criticism over its Llama 4 language model and delaying its flagship "Behemoth" AI model.

Read more of this story at Slashdot.

Microsoft Is Calling Too Many Things 'Copilot,' Watchdog Says

An anonymous reader shares a report: Microsoft has a long history of being criticized for coming up with clunky product names, and for changing them so often it's hard for customers to keep up. The company's own employees once joked in a viral video that the iPod would have been called the "Microsoft I-pod Pro 2005 XP Human Ear Professional Edition with Subscription" had it been created by Microsoft. The latest gripe among some employees and customers: the company's tendency to slap "Copilot" on everything AI. "There is a delusion on our marketing side where literally everything has been renamed to have Copilot in it," one employee told Business Insider late last year. "Everything is Copilot. Nothing else matters. They want a Copilot tie-in for everything." Now, an advertising watchdog is weighing in. The Better Business Bureau's National Advertising Division reviewed Microsoft's advertising for its Copilot AI tools. NAD called out Microsoft's "universal use of the product description as 'Copilot'" and said "consumers would not necessarily understand the difference," according to a recent report from the watchdog. "Microsoft is using 'Copilot' across all Microsoft Office applications and Business Chat, despite differences in functionality and the manual steps that are required for Business Chat to produce the same results as Copilot in a specific Microsoft Office app," NAD further explained in an email to BI. NAD did not make any specific recommendations on product names. But it did say Microsoft should modify claims that Copilot works "seamlessly across all your data" because not all of the company's Copilot-branded tools work together continuously in the way consumers might expect.

Read more of this story at Slashdot.

Field Notes Went From Side Project To Cult Notebook

Field Notes, the analog notebook company that began as designer Aaron Draplin's side project 20 years ago, has sold over 10 million notebooks and is carried in 2,000 stores worldwide, co-founder Jim Coudal told Fast Company. The Chicago-based company, which Coudal says just completed its best year for sales and revenue, with 2025 tracking to exceed those numbers, has grown from selling 13 notebooks on its launch day to producing quarterly edition runs of 30,000 to 60,000 packs. The brand's subscription model, launched in 2009 with 1,500-pack print runs, now encompasses 67 limited editions and provides both predictable cash flow and regular customer engagement opportunities for the company.

Read more of this story at Slashdot.

California AI Policy Report Warns of 'Irreversible Harms'

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost of inaction at this moment could be "extremely high." [...] "Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and to exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says. While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead writer of the report. Instead, the approach centers on enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Mariano-Florentino Cuellar, who co-led the report.
The report emphasizes that this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."

Read more of this story at Slashdot.

Iran Is Going Offline To Prevent Purported Israeli Cyberattacks

In response to escalating tensions with Israel, Iran has begun throttling internet access, with plans to disconnect from the global internet entirely to prevent Israeli cyberattacks. The Iranian government also urges citizens to delete WhatsApp -- one of the country's most popular messaging platforms -- claiming without evidence that the Meta-owned app has been weaponized by Israel to spy on its users. (WhatsApp vehemently denied those claims in a statement to the Associated Press.) Telegram is said to be blocked as well. The Verge reports: The announcements come amidst the escalating war between Iran and Israel, which broke out after Israel attacked the country on June 12th, and a rise in reported internet outages. Civilians have claimed that they've been unable to access basic but critical telecommunications services, such as messaging apps, maps, and sometimes the internet itself. Cloudflare reported that two major Iranian cellular carriers effectively went offline on Tuesday, and The New York Times reports that even VPNs, which Iranians frequently use to access banned sites like Facebook and Instagram, have become increasingly hard to access. [...] Israel's role in the cyber outages has not been officially confirmed, but independent analysts at NetBlocks noticed a significant reduction of internet traffic originating from Iran on Tuesday, starting at 5:30 PM local time. According to Tasnim, a news network affiliated with the Iranian Revolutionary Guards, Iranians will still have access to the country's state-operated national internet service, though two Iranian officials told the Times that the internal bandwidth could be reduced by up to 80 percent.

Read more of this story at Slashdot.

Senate Passes Stablecoin Bill In Major Win For Crypto Industry

The U.S. Senate has approved the GENIUS Act with a 68-30 final vote that "saw a huge surge of Democrats joining their Republican counterparts," reports CoinDesk. The bill sets out to create the first federal regulatory framework for U.S. stablecoins, requiring issuers to maintain full 1:1 reserves in cash or Treasuries, adhere to regular audits and anti-money laundering rules, and gain regulatory approval -- all while allowing foreign stablecoin access under strict oversight rules. From the report: As written, the bill would set up guardrails around the approval and supervision of U.S. issuers of stablecoins, the dollar-based tokens such as the ones backed by Circle, Ripple and Tether. Firms making these digital assets available to U.S. users would have to meet stringent reserve demands, transparency requirements, money-laundering compliance and regulatory supervision that's also likely to include new capital rules. "This is a win for the U.S., a win for innovation and a monumental step towards appropriate regulation for digital assets in the United States," said Amanda Tuminelli, executive director and chief legal officer of the DeFi Education Fund, in a statement. [...] While this is the first significant crypto bill to clear the Senate, it's also the first time a stablecoin bill has passed either chamber, despite years of negotiation in the House Financial Services Committee that managed to produce other major crypto legislation in the previous congressional session. The fate of the GENIUS Act is also tied closely to the House's own Digital Asset Market Clarity Act, the more sweeping crypto bill that would establish the legal footing of the wider U.S. crypto markets. The stablecoin effort is slightly ahead of the bigger task of the market structure bill, but the industry and its lawmaker allies argue that they're inextricably connected and need to become law together. So far, the Clarity Act has been cleared by the relevant House committees and awaits floor action.
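For a sense of what the 1:1 reserve requirement means in practice, the simplified sketch below works through the arithmetic: eligible reserves (here only the cash and Treasuries categories named above) must at least equal the face value of tokens outstanding. The figures and names are hypothetical, and real compliance would also involve the bill's audit, anti-money-laundering, and approval requirements.

```python
# Simplified illustration of a 1:1 reserve test: eligible reserves (cash and
# Treasuries here) must at least equal the face value of stablecoins
# outstanding. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class ReservePortfolio:
    cash_usd: float
    treasuries_usd: float

    @property
    def eligible_total(self) -> float:
        return self.cash_usd + self.treasuries_usd

def is_fully_backed(reserves: ReservePortfolio, tokens_outstanding_usd: float) -> bool:
    """True when eligible reserves cover 100% of tokens outstanding (ratio >= 1.0)."""
    return reserves.eligible_total >= tokens_outstanding_usd

portfolio = ReservePortfolio(cash_usd=2.0e9, treasuries_usd=8.5e9)  # $10.5B eligible reserves
outstanding = 10.0e9                                                # $10B of tokens issued

ratio = portfolio.eligible_total / outstanding
print(f"reserve ratio: {ratio:.3f}, fully backed: {is_fully_backed(portfolio, outstanding)}")
```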

Read more of this story at Slashdot.