Nvidia acquires hi-def mapping startup DeepMap to bolster AV technology

Chipmaker Nvidia is acquiring DeepMap, a high-definition mapping startup, the companies announced. Nvidia said DeepMap’s mapping IP will bolster its autonomous vehicle technology platform, Nvidia Drive.

“The acquisition is an endorsement of DeepMap’s unique vision, technology and people,” said Ali Kani, vice president and general manager of Automotive at Nvidia, in a statement. “DeepMap is expected to extend our mapping products, help us scale worldwide map operations and expand our full self-driving expertise.”

One of the biggest challenges to achieving full autonomy in a passenger vehicle is maintaining precise localization and up-to-date mapping information that reflects current road conditions. By integrating DeepMap’s tech, Nvidia’s autonomous stack should gain precision, improving the vehicle’s ability to locate itself on the road.

“Joining forces with Nvidia will allow our technology to scale more quickly and benefit more people sooner. We look forward to continuing our journey as part of the Nvidia team,” said James Wu, co-founder and CEO of DeepMap, in a statement.

DeepMap — founded by James Wu and Mark Wheeler, former employees of Google, Apple and Baidu — can use Nvidia Drive’s software-defined platform to scale its maps across AV fleets quickly via over-the-air updates and without excessive data storage. Nvidia will also invest in new capabilities for DeepMap as part of the deal.

Nvidia is expected to finalize the acquisition in Q3 2021.

#autonomous-vehicles, #deepmap, #ma, #nvidia, #transportation

Microsoft plans to launch dedicated Xbox cloud gaming hardware

Microsoft will soon launch a dedicated device for game streaming, the company announced today. It’s also working with a number of TV manufacturers to build the Xbox experience right into their internet-connected screens, and Microsoft plans to bring cloud gaming to the Xbox app on PC later this year, too, with a focus on play-before-you-buy scenarios.

It’s unclear what these new game streaming devices will look like. Microsoft didn’t provide any further details. But chances are, we’re talking about either a Chromecast-like streaming stick or a small Apple TV-like box. So far, we also don’t know which TV manufacturers it will partner with.

It’s no secret that Microsoft is bullish about cloud gaming. With Xbox Game Pass Ultimate, it’s already making it possible for its subscribers to play more than 100 console games on Android, streamed from the Azure cloud, for example. In a few weeks, it’ll open cloud gaming in the browser on Edge, Chrome and Safari, to all Xbox Game Pass Ultimate subscribers (it’s currently in limited beta). And it is bringing Game Pass Ultimate to Australia, Brazil, Mexico and Japan later this year, too.

In many ways, Microsoft is unbundling gaming from the hardware — similar to what Google is trying with Stadia (an effort that, so far, has fallen flat for Google) and Amazon with Luna. The major advantage Microsoft has here is a large library of popular games, something that’s mostly missing on competing services, with the exception of Nvidia’s GeForce Now platform — though that one has a different business model since its focus is not on a subscription but on allowing you to play the games you buy in third-party stores like Steam or the Epic store.

What Microsoft clearly wants to do is expand the overall Xbox ecosystem, even if that means it sells fewer dedicated high-powered consoles. The company likens this to the music industry’s transition to cloud-powered services backed by all-you-can-eat subscription models.

“We believe that games, that interactive entertainment, aren’t really about hardware and software. It’s not about pixels. It’s about people. Games bring people together,” said Microsoft’s Xbox head Phil Spencer. “Games build bridges and forge bonds, generating mutual empathy among people all over the world. Joy and community — that’s why we’re here.”

It’s worth noting that Microsoft says it’s not doing away with dedicated hardware, though, and is already working on the next generation of its console hardware — but don’t expect a new Xbox console anytime soon.

#amazon, #android, #australia, #brazil, #cloud-gaming, #computing, #directx, #gadgets, #gaming, #google, #hardware, #japan, #luna, #mexico, #microsoft, #nvidia, #phil-spencer, #tc, #xbox, #xbox-cloud-gaming, #xbox-game-pass

RTX 3070 Ti review: Nvidia leaves the GPU fast lane (for now)

In a normal GPU marketplace, Nvidia’s new GPU—the RTX 3070 Ti—would land either as a welcome jump or a power-per-watt disappointment. In the chip-shortage squeeze of 2021, however, both its biggest successes and shortcomings may slip by without much fanfare.

The company’s RTX 3070 launched eight months ago at an MSRP of $499, and it did so at an incredibly efficient power-to-performance ratio. There’s simply no better 220 W GPU on the market, as the RTX 3070 noticeably pulled ahead of the 200 W RTX 3060 Ti and AMD’s 230 W RX 6700 XT. That efficiency, unsurprisingly, isn’t repeated with the new model released this week: the RTX 3070 Ti. This device’s MSRP jumps 20 percent (to “$599,” but mind the scare quotes), and its TDP screams ahead by 32 percent. We’ve been here before, of course. “Ti”-branded Nvidia cards aren’t usually as power-efficient as their namesakes, and that’s fine, especially if a mild $100 price jump yields a solid increase in performance.
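
To put those percentages in concrete numbers, here is a quick back-of-the-envelope check in Python. The $499 and $599 MSRPs and the 220 W figure come from the paragraph above; the roughly 290 W result is simply what a 32 percent increase implies, not an official spec quoted here.

```python
# Back-of-the-envelope check of the generational jumps described above.
rtx_3070_msrp = 499      # USD, launch MSRP cited above
rtx_3070_ti_msrp = 599   # USD, nominal MSRP of the new card
rtx_3070_tdp_w = 220     # watts, cited above

price_jump = (rtx_3070_ti_msrp - rtx_3070_msrp) / rtx_3070_msrp
print(f"MSRP increase: {price_jump:.0%}")                     # ~20%

tdp_jump = 0.32                                               # the 32 percent figure cited above
implied_ti_tdp_w = rtx_3070_tdp_w * (1 + tdp_jump)
print(f"Implied RTX 3070 Ti TDP: ~{implied_ti_tdp_w:.0f} W")  # ~290 W, implied rather than quoted
```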

But the RTX 3070 Ti spec sheet doesn’t see Nvidia charge ahead in ways that might match the jump in wattage. And while the 3070 Ti’s performance mostly increases across the board, the gains aren’t in any way a revolution. That may be less about Nvidia’s design prowess and more about squeezing this thing between the impressive duo of the RTX 3070 and RTX 3080 ($699) on an MSRP basis.

#features, #gaming-culture, #nvidia, #nvidia-rtx, #nvidia-rtx-3070-ti, #rtx-3000-series, #tech

Nvidia and Valve are bringing DLSS to Linux gaming… sort of

Tux looks a lot more comfortable sitting on that logo than he probably should—Nvidia’s drivers are still proprietary, and DLSS support isn’t available for native Linux apps, only for Windows apps running under Proton. (credit: Aurich Lawson / Jim Salter / Larry Ewing / Nvidia)

Linux gamers, rejoice—we’re getting Nvidia’s Deep Learning Super Sampling on our favorite platform! But don’t rejoice too hard; the new support only comes on a few games, and it’s only on Windows versions of those games played via Proton.

At Computex 2021, Nvidia announced a collaboration with Valve to bring DLSS support to Windows games played on Linux systems. This is good news, since DLSS can radically improve frame rates without perceptibly altering graphics quality. Unfortunately, as of this month, fewer than 60 games support DLSS in the first place; of those, roughly half work reasonably well in Proton, with or without DLSS.

What’s a DLSS, anyway?

Nvidia’s own benchmarking shows well over double the frame rate in Metro Exodus; most third-party benchmarks “only” show an improvement of 50 to 75 percent. Note that the DLSS image actually looks sharper and cleaner than the non-DLSS one in this case. (credit: Nvidia)

If you’re not up on all the gaming graphics jargon, DLSS is an acronym for Deep Learning Super Sampling. Effectively, DLSS takes a low-resolution image and uses deep learning to upsample it to a higher resolution on the fly. The impact of DLSS can be astonishing in games that support the tech—in some cases more than doubling non-DLSS frame rates, usually with little or no visual impact.
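
To see why rendering fewer pixels helps so much, here is a deliberately crude sketch in Python. It assumes shading cost scales linearly with pixel count and that the upscale pass adds a small fixed cost per frame; the millisecond figures are invented for illustration, not measurements of DLSS.

```python
# Illustrative only: a crude model of why rendering at a lower internal
# resolution and then upscaling can raise frame rates so much. Assumes
# shading cost is proportional to pixel count and the upscale pass adds a
# fixed per-frame cost -- made-up numbers, not measurements.

def frame_time_ms(width, height, ms_per_megapixel=6.0, upscale_cost_ms=0.0):
    megapixels = width * height / 1e6
    return megapixels * ms_per_megapixel + upscale_cost_ms

native_4k = frame_time_ms(3840, 2160)                       # shade every 4K pixel
upscaled = frame_time_ms(2560, 1440, upscale_cost_ms=1.5)   # shade at 1440p, then upscale

print(f"native 4K: {native_4k:.1f} ms -> {1000 / native_4k:.0f} fps")
print(f"upscaled:  {upscaled:.1f} ms -> {1000 / upscaled:.0f} fps")  # roughly double the frame rate
```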

#dlss, #gaming-culture, #linux, #linux-gaming, #nvidia, #proton, #steam, #tech

Review: Nvidia RTX 3080 Ti is a powerhouse—but good luck finding it at $1,199 MSRP

Nearly nine months ago, the RTX 3000 series of Nvidia graphics cards launched in a beleaguered world as a seeming ray of hope. The series’ first two GPUs, the RTX 3080 and 3070, were nearly all things to all graphics hounds. Nvidia built these cards upon the proprietary successes of the RTX 2000 series and added sheer, every-API-imaginable rasterization power on top.

An “RTX”-optimized game ran great on the line’s opening salvo of the RTX 3080, sure, but even without streamlined ray tracing or the impressive upsampling of DLSS, it tera’ed a lot of FLOPs. Talk about a fun potential purchase for nerds trapped in the house.

Even better, that power came along with more modest MSRPs than we saw in the RTX 2000 series, as I wrote in September 2020.

#amd, #amd-radeon, #features, #gaming-culture, #nvidia, #nvidia-rtx, #rtx-3080, #rtx-3080-ti, #tech

Nvidia will add anti-mining flags to the rest of its RTX 3000 GPU series

Coming soon: nearly identical versions of these GPUs, only with “LHR” logos—and new measures to reduce their mining hash rates. (credit: Sam Machkovech)

Nvidia’s GeForce RTX 3000-branded graphics cards are receiving an update off the factory lines starting this month: hardware-level flags meant to slow down the mining of the popular cryptocurrency Ethereum. Nvidia’s Tuesday announcement confirmed that most consumer-grade GPUs coming out of the company’s factories, ranging from the RTX 3060 Ti to the RTX 3080, will ship with a new sticker to indicate a “Lite Hash Rate,” or “LHR,” on the hardware, driver, and BIOS level.

If this move sounds familiar, that’s because Nvidia already took a massive swing at the cryptomining problem, only to whiff, with February’s RTX 3060. That GPU’s launch came with promises that its Ethereum mining rates had been cut in half from their full potential rate—a move meant to disincentivize miners from buying up limited stock. And in the GPU’s pre-release period, Nvidia PR Director Bryan Del Rizzo claimed on Twitter that “it’s not just a driver thing. There is a secure handshake between the driver, the RTX 3060 silicon, and the BIOS (firmware) that prevents removal of the hash rate limiter.”

Yet shortly after that card’s commercial launch, Nvidia released a developer-specific beta firmware driver that unlocked the GPU’s full mining potential. Remember: that’s firmware, not a BIOS rewrite or anything particularly invasive. With that cat out of the bag, the RTX 3060 forever became an Ethereum mining option.

#gpu, #gpus, #graphics-cards, #nvidia, #nvidia-rtx, #rtx-3080, #tech

Arm launches its latest chip design for HPC, data centers and the edge

Arm today announced the launch of two new platforms, Arm Neoverse V1 and Neoverse N2, as well as a new mesh interconnect for them. As you can tell from the name, V1 is a completely new product and maybe the best example yet of Arm’s ambitions in the data center, high-performance computing and machine learning space. N2 is Arm’s next-generation general compute platform that is meant to span use cases from hyperscale clouds to SmartNICs and running edge workloads. It’s also the first design based on the company’s new Armv9 architecture.

Not too long ago, high-performance computing was dominated by a small number of players, but the Arm ecosystem has scored its fair share of wins here recently, with supercomputers in South Korea, India and France betting on it. The promise of V1 is that it will vastly outperform the older N1 platform, with a 2x gain in floating-point performance, for example, and a 4x gain in machine learning performance.

“The V1 is about how much performance can we bring — and that was the goal,” Chris Bergey, SVP and GM of Arm’s Infrastructure Line of Business, told me. He also noted that V1 is Arm’s widest architecture yet and that, while it wasn’t built specifically for the HPC market, HPC was definitely a target. And while the current Neoverse V1 platform isn’t based on the new Armv9 architecture yet, the next generation will be.

N2, on the other hand, is all about getting the most performance per watt, Bergey stressed. “This is really about staying in that same performance-per-watt-type envelope that we have within N1 but bringing more performance,” he said. In Arm’s testing, NGINX saw a 1.3x performance increase versus the previous generation, for example.

In many ways, today’s release is also a chance for Arm to highlight its recent customer wins. AWS Graviton2 is obviously doing quite well, but Oracle is also betting on Ampere’s Arm-based Altra CPUs for its cloud infrastructure.

“We believe Arm is going to be everywhere — from edge to the cloud. We are seeing N1-based processors deliver consistent performance, scalability and security that customers want from Cloud infrastructure,” said Bev Crair, senior VP, Oracle Cloud Infrastructure Compute. “Partnering with Ampere Computing and leading ISVs, Oracle is making Arm server-side development a first-class, easy and cost-effective solution.”

Meanwhile, Alibaba Cloud and Tencent are both investing in Arm-based hardware for their cloud services as well, while Marvell will use the Neoverse N2 architecture for its OCTEON networking solutions.

#alibaba, #arm, #aws, #cloud-infrastructure, #cloud-services, #computing, #enterprise, #india, #machine-learning, #nvidia, #oracle, #oracle-cloud, #softbank-group, #south-korea, #svp, #tc, #technology, #tencent

Scale AI founder and CEO Alexandr Wang will join us at TC Sessions: Mobility on June 9

Last week, Scale AI announced a massive $325 million Series E. Led by Dragoneer, Greenoaks Capital and Tiger Global, the raise gives the San Francisco data labeling startup a $7 billion valuation.

Alexandr Wang founded the company back in 2016, while still at MIT. A veteran of Quora and Addepar, Wang built the startup to curate information for AI applications. The company is now a break-even business, with a wide range of top-notch clients, including General Motors, NVIDIA, Nuro and Zoox.

Backed by a ton of venture capital, the company plans a large-scale increase in its headcount, as it builds out new products and expands into additional markets. “One thing that we saw, especially in the course of the past year, was that AI is going to be used for so many different things,” Wang told TechCrunch in a recent interview. “It’s like we’re just sort of really at the beginning of this and we want to be prepared for that as it happens.”

The executive will join us on stage at TC Sessions: Mobility on June 9 to discuss how the company has made a major impact on the industry in its short five years of existence, the role AI is playing in the world of transportation and what the future looks like for Scale AI.

In addition to Wang, TC Sessions: Mobility 2021 will feature an incredible lineup of speakers, presentations, fireside chats and breakouts all focused on the current and future state of mobility — like EVs, micromobility and smart cities for starters — and the investment trends that influence them all.

Investors like Clara Brenner (Urban Innovation Fund), Quin Garcia (Autotech Ventures) and Rachel Holt (Construct Capital) will also grace our virtual stage. They’ll have plenty of insight and advice to share, including the challenges that startup founders will face as they break into the transportation arena.

You’ll hear from CEOs like Starship Technologies’ Ahti Heinla. The company’s been busy testing delivery robots in real-world markets. Don’t miss his discussion touching on challenges ranging from technology to red tape and what it might take to make last-mile robotic delivery a mainstream reality.

Grab your early bird pass today and save $100 on tickets before prices go up in less than a month.

#addepar, #alexandr-wang, #articles, #artificial-intelligence, #autotech-ventures, #clara-brenner, #deliv, #economy, #entrepreneurship, #executive, #general-motors, #greenoaks-capital, #micromobility, #mit, #nuro, #nvidia, #quora, #rachel-holt, #san-francisco, #scale-ai, #starship-technologies, #startup-company, #tc-sessions-mobility, #technology, #tiger-global, #transportation, #urban-innovation-fund, #venture-capital, #wang

Huawei is not a carmaker. It wants to be the Bosch of China

One after another, Chinese tech giants have announced their plans for the auto space over the last few months. Some internet companies, like search engine provider Baidu, decided to recruit help from a traditional carmaker to produce cars. Xiaomi, which makes its own smartphones but has stressed for years it’s a light-asset firm making money from software services, also jumped on the automaking bandwagon. Industry observers are now speculating who will be the next. Huawei naturally comes to their minds.

Huawei seems well-suited for building cars — at least more qualified than some of the pure internet firms — thanks to its history in manufacturing and supply chain management, brand recognition, and vast retail network. But the telecom equipment and smartphone maker repeatedly denied reports claiming it was launching a car brand. Instead, it says its role is to be a Tier 1 supplier for automakers or OEMs (original equipment manufacturers).

Huawei is not a carmaker, the company’s rotating chairman Eric Xu reiterated recently at the firm’s annual analyst conference in Shenzhen.

“Since 2012, I have personally engaged with the chairmen and CEOs of all major car OEMs in China as well as executives of German and Japanese automakers. During this process, I found that the automotive industry needs Huawei. It doesn’t need the Huawei brand, but instead, it needs our ICT [information and communication technology] expertise to help build future-oriented vehicles,” said Xu, who said the strategy has not changed since it was incepted in 2018.

There are three major roles in auto production: branded vehicle manufacturers like Audi, Honda, Tesla, and soon Apple; Tier 1 companies that supply car parts and systems directly to carmakers, including established ones like Bosch and Continental, and now Huawei; and lastly, chip suppliers including Nvidia, Intel and NXP, whose role is increasingly crucial as industry players make strides toward highly automated vehicles. Huawei also makes in-house car chips.

“Huawei wants to be the next-generation Bosch,” an executive from a Chinese robotaxi startup told TechCrunch, asking not to be named.

Huawei makes its position as a Tier 1 supplier unequivocal. So far it has secured three major customers: BAIC, Chang’an Automobile, and Guangzhou Automobile Group.

“We won’t have too many of these types of in-depth collaboration,” Xu assured.

L4 autonomy?

Arcfox, a new electric passenger car brand under state-owned carmaker BAIC, debuted its Alpha S model equipped with Huawei’s “HI” systems, short for Huawei Inside (not unlike “Powered by Intel”), during the annual Shanghai auto show on Saturday. The electric sedan, priced between 388,900 yuan and 429,900 yuan (about $60,000 and $66,000), comes with Huawei functions including an operating system driven by Huawei’s Kirin chip, a range of apps that run on HarmonyOS, automated driving, fast charging, and cloud computing.

Perhaps most eye-catching is that Alpha S has achieved Level 4 capabilities, which Huawei confirmed with TechCrunch.

That’s a bold statement, for it means the car will not require human intervention in most scenarios; that is, drivers can take their hands off the wheel and nap.

There are some nuances to this claim, though. In a recent interview, Su Qing, general manager for autonomous driving at Huawei, said Alpha S is L4 in terms of “experience” but L2 according to “legal” responsibilities. China has only permitted a small number of companies to test autonomous vehicles without safety drivers in restricted areas and is far from letting consumer-grade driverless cars roam urban roads.

As it turned out, Huawei’s “L4” functions were shown during a demo, in which the Arcfox car traveled for 1,000 kilometers in a busy Chinese city without human intervention, though a safety driver was present in the driver’s seat. Automating the car is a stack of sensors, including three lidars, six millimeter-wave radars, 13 ultrasonic radars and 12 cameras, as well as Huawei’s own chipset for automated driving.

“This would be much better than Tesla,” Xu said of the car’s capabilities.

But some argue the Huawei-powered vehicle isn’t L4 by strict definition. The debate seems to be a matter of semantics.

“Our cars you see today are already L4, but I can assure you, I dare not let the driver leave the car,” Su said. “Before you achieve really big MPI [miles per intervention] numbers, don’t even mention L4. It’s all just demos.”

“It’s not L4 if you can’t remove the safety driver,” the executive from the robotaxi company argued. “A demo can be done easily, but removing the driver is very difficult.”

“This technology that Huawei claims is different from L4 autonomous driving,” said a director working for another Chinese autonomous vehicle startup. “The current challenge for L4 is not whether it can be driverless but how to be driverless at all times.”
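
For context on the benchmark Su invokes, MPI is simply the distance driven autonomously divided by the number of human takeovers. A minimal sketch with invented numbers shows why one clean 1,000 km demo run says little on its own:

```python
# Distance per intervention: a minimal sketch with invented numbers.
def km_per_intervention(distance_km, interventions):
    if interventions == 0:
        # A run with zero takeovers only sets a lower bound on the metric.
        return float("inf")
    return distance_km / interventions

print(km_per_intervention(1_000, 0))        # one clean 1,000 km demo: unbounded, but statistically meaningless
print(km_per_intervention(2_000_000, 40))   # hypothetical fleet mileage: 50,000 km per intervention
```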

L4 or not, Huawei is certainly willing to splurge on the future of driving. This year, the firm is on track to spend $1 billion on smart vehicle components and tech, Xu said at the analyst event.

A 5G future

Many believe 5G will play a key role in accelerating the development of driverless vehicles. Huawei, the world’s biggest telecom equipment maker, would have a lot to reap from 5G rollouts across the globe, but Xu argued the next-gen wireless technology isn’t a necessity for self-driving vehicles.

“To make autonomous driving a reality, the vehicles themselves have to be autonomous. That means a vehicle can drive autonomously without external support,” Xu said.

“Completely relying on 5G or 5.5G for autonomous driving will inevitably cause problems. What if a 5G site goes wrong? That would raise a very high bar for mobile network operators. They would have to ensure their networks cover every corner, don’t go wrong in any circumstances and have high levels of resilience. I think that’s simply an unrealistic expectation.”

Huawei may be happy enough as a Tier 1 supplier if it ends up taking over Bosch’s market. Many Chinese companies are shifting away from Western tech suppliers towards homegrown options in anticipation of future sanctions or simply to seek cheaper alternatives that are just as robust. Arcfox is just the beginning of Huawei’s car ambitions.

#apple, #artificial-intelligence, #asia, #audi, #automotive, #bosch, #china, #continental, #eric-xu, #harmony, #harmonyos, #honda, #huawei, #intel, #nvidia, #nxp, #operating-system, #shanghai, #shenzhen, #supply-chain-management, #tc, #tesla, #transportation, #wireless-technology, #xiaomi

UK gov’t triggers national security scrutiny of Nvidia-Arm deal

The UK government has intervened to trigger public interest scrutiny of chipmaker Nvidia’s planned purchase of Arm Holdings.

The secretary of state for digital, Oliver Dowden, said today that the government wants to ensure that any national security implications of the semiconductor deal are explored.

Nvidia’s $40BN acquisition of UK-based Arm was announced last September but remains to be cleared by regulators.

The UK’s Competition and Markets Authority (CMA) began to solicit views on the proposed deal in January.

Domestic opposition to Nvidia’s plan has been swift, with one of the original Arm co-founders kicking off a campaign to ‘save Arm’ last year. Hermann Hauser warned that Arm’s acquisition by a U.S. entity would end its position as a company independent of U.S. interests — risking the U.K.’s economic sovereignty by surrendering its most powerful trade weapon.

The intervention by the Department for Digital, Culture, Media and Sport (DCMS) — using statutory powers set out in the Enterprise Act 2002 — means the competition regulator has been instructed to begin a phase 1 investigation.

The CMA has a deadline of July 30 to submit its report to the secretary of state.

Commenting in a statement, Dowden said: “Following careful consideration of the proposed takeover of ARM, I have today issued an intervention notice on national security grounds. As a next step and to help me gather the relevant information, the UK’s independent competition authority will now prepare a report on the implications of the transaction, which will help inform any further decisions.”

“We want to support our thriving UK tech industry and welcome foreign investment but it is appropriate that we properly consider the national security implications of a transaction like this,” he added.

At the completion of the CMA’s phase 1 investigation Dowden will have an option to clear the deal, i.e. if no national security or competition concerns have been identified; or to clear it with remedies to address any identified concerns.

He could also refer the transaction for further scrutiny by instructing the CMA to carry out an in-depth phase 2 investigation.

After the phase 1 report has been submitted, there is no set period within which the secretary of state must make a decision on next steps — but DCMS notes that a decision should be made as soon as “reasonably practicable” to reduce uncertainty.

While Dowden’s intervention has been made on national security grounds, additional concerns have been raised about the impact of an Nvidia takeover of Arm — specifically on UK jobs and on Arm’s open licensing model.

Nvidia sought to address those concerns last year, claiming it’s committed to Arm’s licensing model and pledging to expand the Cambridge, UK offices of Arm — saying it would create “a new global center of excellence in AI research” at the UK campus.

However it’s hard to see what commercial concessions could be offered to assuage concern over the ramifications of an Nvidia-owned Arm on the UK’s economic sovereignty. That’s because it’s a political risk, which would require a political solution to allay, such as at a treaty level — something which isn’t in Nvidia’s gift (alone) to give.

National security concerns are a rising operational risk for tech companies involved in the supply of cutting edge infrastructure, such as semiconductor design and next-gen networks — where a relative paucity of competitors not only limits market choice but amps up the political calculations.

Proposed mergers are one key flash point as market consolidation takes on an acute politico-economic dimension.

However tech companies’ operations are being more widely squeezed in the name of national security — such as, in recent years, the U.S. government’s attacks on China-based 5G infrastructure suppliers like Huawei, with former president Trump seeking to have the company barred from supplying next-gen networks not only within the U.S. but to national networks of Western allies.

Nor has (geo)political pressure been applied purely over key infrastructure companies in recent years; with Trump claiming a national security justification to try and shake down the Chinese-owned social networking company, TikTok — in another example that speaks to how tech tools are being coopted into wider geopolitical power-plays, fuelled by countries’ economic and political self-interest.

#arm-holdings, #artificial-intelligence, #cambridge, #cma, #competition-and-markets-authority, #computer-security, #europe, #huawei, #ma, #national-security, #nvidia, #oliver-dowden, #security, #semiconductor, #tiktok, #trump, #u-s-government, #uk-government, #united-kingdom, #united-states

Intel, Nvidia, TSMC execs agree: Chip shortage could last into 2023

How many years will the ongoing chip shortage affect technology firms across the world? This week, multiple tech executives offered their own dismal estimates as part of their usual public financial disclosures, with the worst one coming in at “a couple of years.”

That nasty estimate comes from Intel CEO Pat Gelsinger, who offered that vague timeframe to The Washington Post in an interview on Tuesday. He clarified that was an estimate for how long it would take the company to “build capacity” to potentially address supply shortages. The conversation came as Intel offered to step up for two supply chains particularly pinched by the silicon drought: medical supplies and in-car computer systems.

In previous statements, Gelsinger pointed to Intel’s current $20 billion plan to build a pair of factories in Arizona, and this week’s interview added praise for President Joe Biden’s proposed $50 billion chip-production infrastructure plan—though Gelsinger indicated that Biden should be ready to spend more than that.

#chip-shortage, #gpus, #intel, #nvidia, #tech, #tsmc

China’s Xpeng in the race to automate EVs with lidar

Elon Musk famously said any company relying on lidar is “doomed.” Tesla instead believes automated driving functions are built on visual recognition and is even working to remove the radar. China’s Xpeng begs to differ.

Founded in 2014, Xpeng is one of China’s most celebrated electric vehicle startups and went public when it was just six years old. Like Tesla, Xpeng sees automation as an integral part of its strategy; unlike the American giant, Xpeng uses a combination of radar, cameras, high-precision maps powered by Alibaba, localization systems developed in-house, and most recently, lidar to detect and predict road conditions.

“Lidar will provide the 3D drivable space and precise depth estimation to small moving obstacles even like kids and pets, and obviously, other pedestrians and the motorbikes which are a nightmare for anybody who’s working on driving,” Xinzhou Wu, who oversees Xpeng’s autonomous driving R&D center, said in an interview with TechCrunch.

“On top of that, we have the usual radar which gives you location and speed. Then you have the camera which has very rich, basic semantic information.”

Xpeng is adding lidar to its mass-produced EV model P5, which will begin deliveries in the second half of this year. The car, a family sedan, will later be able to drive from point A to B based on a navigation route set by the driver, on highways and certain urban roads in China that are covered by Alibaba’s maps. An older model without lidar already enables assisted driving on highways.

The system, called Navigation Guided Pilot, is benchmarked against Tesla’s Navigate on Autopilot, said Wu. It can, for example, automatically change lanes, enter or exit ramps, overtake other vehicles, and maneuver around another car’s sudden cut-in, a common sight in China’s complex road conditions.

“The city is super hard compared to the highway but with lidar and precise perception capability, we will have essentially three layers of redundancy for sensing,” said Wu.

By definition, NGP is an advanced driver-assistance system (ADAS) as drivers still need to keep their hands on the wheel and take control at any time (Chinese laws don’t allow drivers to be hands-off on the road). The carmaker’s ambition is to remove the driver, that is, reach Level 4 autonomy two to four years from now, but real-life implementation will hinge on regulations, said Wu.

“But I’m not worried about that too much. I understand the Chinese government is actually the most flexible in terms of technology regulation.”

The lidar camp

Musk’s disdain for lidar stems from the high costs of the remote sensing method that uses lasers. In the early days, a lidar unit spinning on top of a robotaxi could cost as much as $100,000, said Wu.

“Right now, [the cost] is at least two orders low,” said Wu. After 13 years with Qualcomm in the U.S., Wu joined Xpeng in late 2018 to work on automating the company’s electric cars. He currently leads a core autonomous driving R&D team of 500 staff and said the force will double in headcount by the end of this year.

“Our next vehicle is targeting the economy class. I would say it’s mid-range in terms of price,” he said, referring to the firm’s new lidar-powered sedan.

The lidar sensors powering Xpeng come from Livox, a firm touting more affordable lidar and an affiliate of DJI, the Shenzhen-based drone giant. Xpeng’s headquarters is in the adjacent city of Guangzhou, about a 1.5-hour drive away.

Xpeng isn’t the only one embracing lidar. Nio, a Chinese rival to Xpeng targeting a more premium market, unveiled a lidar-powered car in January but the model won’t start production until 2022. Arcfox, a new EV brand of Chinese state-owned carmaker BAIC, recently said it would be launching an electric car equipped with Huawei’s lidar.

Musk recently hinted that Tesla may remove radar from production outright as it inches closer to pure vision based on camera and machine learning. The billionaire founder isn’t particularly a fan of Xpeng, which he alleged owned a copy of Tesla’s old source code.

In 2019, Tesla filed a lawsuit against Cao Guangzhi alleging that the former Tesla engineer stole trade secrets and brought them to Xpeng. XPeng has repeatedly denied any wrongdoing. Cao no longer works at Xpeng.

Supply challenges

While Livox claims to be an independent entity “incubated” by DJI, a source told TechCrunch previously that it is just a “team within DJI” positioned as a separate company. The intention to distance from DJI comes as no one’s surprise as the drone maker is on the U.S. government’s Entity List, which has cut key suppliers off from a multitude of Chinese tech firms including Huawei.

Other critical parts that Xpeng uses include Nvidia’s Xavier system-on-chip computing platform and Bosch’s iBooster brake system. Globally, the ongoing semiconductor shortage is pushing auto executives to ponder future scenarios in which self-driving cars become even more dependent on chips.

Xpeng is well aware of supply chain risks. “Basically, safety is very important,” said Wu. “It’s more than the tension between countries around the world right now. Covid-19 is also creating a lot of issues for some of the suppliers, so having redundancy in the suppliers is some strategy we are looking very closely at.”

Taking on robotaxis

Xpeng could have easily tapped the flurry of autonomous driving solution providers in China, including Pony.ai and WeRide in its backyard of Guangzhou. Instead, Xpeng became their competitor, working on automation in-house and pledging to outrival the artificial intelligence startups.

“The availability of massive computing for cars at affordable costs and the fast dropping price of lidar is making the two camps really the same,” Wu said of the dynamics between EV makers and robotaxi startups.

“[The robotaxi companies] have to work very hard to find a path to a mass-production vehicle. If they don’t do that, two years from now, they will find the technology is already available in mass production and their value will become much less than today’s,” he added.

“We know how to mass-produce a technology up to the safety requirement and the quarantine required of the auto industry. This is a super high bar for anybody wanting to survive.”

Xpeng has no plans of going visual-only. Options of automotive technologies like lidar are becoming cheaper and more abundant, so “why do we have to bind our hands right now and say camera only?” Wu asked.

“We have a lot of respect for Elon and his company. We wish them all the best. But we will, as Xiaopeng [founder of Xpeng] said in one of his famous speeches, compete in China and hopefully in the rest of the world as well with different technologies.”

5G, coupled with cloud computing and cabin intelligence, will accelerate Xpeng’s path to achieve full automation, though Wu couldn’t share much detail on how 5G is used. When unmanned driving is viable, Xpeng will explore “a lot of exciting features” that go into a car when the driver’s hands are freed. Xpeng’s electric SUV is already available in Norway, and the company is looking to further expand globally.

#alibaba, #artificial-intelligence, #asia, #automation, #automotive, #baic, #bosch, #cars, #china, #cloud-computing, #driver, #electric-car, #elon-musk, #emerging-technologies, #engineer, #founder, #huawei, #lasers, #li-auto, #lidar, #livox, #machine-learning, #nio, #norway, #nvidia, #qualcomm, #robotaxi, #robotics, #self-driving-cars, #semiconductor, #shenzhen, #tc, #tesla, #transport, #transportation, #u-s-government, #united-states, #wu, #xavier, #xiaopeng, #xpeng

Dell Alienware launches its first AMD-powered gaming laptop since 2007

This time last year, we covered one of the first Ryzen 9 gaming laptops—Asus’ ROG Zephyrus G14. A year later, Dell is joining the Ryzen-powered gaming laptop party with its new Alienware m15 Ryzen Edition.

Last year, the Achilles heel of Ryzen-powered gaming laptops was mediocre GPU selection—for whatever reason, most manufacturers didn’t spec RTX 3000-series GPUs along with Ryzen processors. That’s thankfully no longer the case in 2021—the new Alienware m15 pairs a Ryzen 7 5800H or Ryzen 9 5900HX with up to 32GiB of RAM and a choice of RTX 3060 or RTX 3070 graphics. (Asus is also offering high-end GPUs this year—we’re in the process of hands-on testing a ROG Zephyrus G15 with Ryzen 9 5900HS and RTX 3070 this week.)

You can get a quick peek at the Alienware m15 in this short hype video.

The new m15 offers three display choices—1080p at 165 Hz or 360 Hz, or 1440p at 240 Hz—and an optional keyboard upgrade to Cherry MX. There’s a standard 2.5 Gbps wired Ethernet adapter to go with the Killer AX1650 Wi-Fi 6—and unlike the ROG Zephyrus gaming laptops, the Alienware m15 has an integrated 720p webcam.

#alienware, #amd-ryzen-5000, #dell, #gaming-laptop, #nvidia, #nvidia-rtx, #ryzen-5000, #ryzen-mobile, #tech

Nvidia now lets “RTX Voice” noise cancellation run on GTX-level cards

Look how much smoother those lines get! (credit: Nvidia)

Last year, Nvidia released RTX Voice, a pretty good GPU-driven noise-cancellation technology that could be hacked to run on non-RTX graphics cards. Since then, it turns out that Nvidia has quietly and officially unlocked the ability to reduce outside noise when using a microphone on systems with lower-powered GTX-level graphics cards as well.

A quick hat tip to Tom’s Hardware, which recently noticed an updated version of Nvidia’s RTX Voice setup guide. It currently notes that “to use RTX Voice, you must be using an NVIDIA GTX or RTX graphics card, update to Driver 410.18 or newer, and be on Windows 10 [emphasis added].”

The addition of GTX cards to the “requirements” section of the guide was made around the end of October 2020, according to a quick perusal of the Internet Archive. About a month before that, Nvidia added an update to the page noting that “RTX Voice is now enabled for any NVIDIA GeForce, Quadro or TITAN GPU [emphasis added].”

#gaming-culture, #graphics-cards, #gtx, #nvidia, #rtx

Arm announces the next generation of its processor architecture

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates to the platform that warrant a shift in version numbers. Unsurprisingly, Armv9 builds on v8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s confidential compute architecture and the concept of Realms. These Realms enable developers to write applications where the data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, this was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, not only for mobile CPUs but also for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”

#ai-chips, #artificial-intelligence, #aws, #companies, #computers, #computing, #dsp, #exynos, #image-processing, #machine-learning, #nvidia, #operating-system, #operating-systems, #samsung-electronics, #soc, #softbank-group, #tc

How a “Switch Pro” leak may point to Nvidia’s megaton mobile-gaming plans

Nvidia is already tied heavily to the existing Nintendo Switch, since the console includes the company’s Tegra X1 SoC. But recent rumors make us wonder about Nvidia’s potential push into mobile 3D-rendering dominance. (credit: Aurich Lawson / Ars Technica)

Earlier this week, Bloomberg Japan’s report on a rumored Nintendo Switch “Pro” version exploded with a heavy-duty allegation: all those rumors about a “4K” Switch might indeed be true after all. The latest report on Tuesday teased a vague bump in specs like clock speed and memory, which could make the Switch run better… but jumping all the way to 4K resolution would need a massive bump from the 2016 system’s current specs.

What made the report so interesting was that it had a technical answer to that seemingly impossible rendering challenge. Nvidia, Nintendo’s exclusive SoC provider for existing Switch models, will remain on board for this refreshed model, Bloomberg said, and that contribution will include the tantalizing, Nvidia-exclusive “upscaling” technology known as Deep Learning Super Sampling (DLSS).

Since that report went live, I’ve done some thinking, and I can’t shake a certain feeling: Nvidia has a much bigger plan for the future of average users’ computing than it has publicly let on.

#dlss, #fortnite, #gaming-culture, #genshin-impact, #nintendo-switch, #nvidia

Report: The next Nintendo Switch will deliver 4K on TVs via Nvidia’s DLSS

An artist’s estimation of how a new DLSS-fueled Nintendo Switch dock might look. (credit: Getty Images / Sam Machkovech)

As leaks begin to mount about a new Nintendo Switch revision, colloquially referred to as “Switch Pro,” one recent suggestion had enthusiasts scratching their heads: 4K support. How exactly would a dockable console like Switch, designed for portability and decent battery life, muster the teraflops to run games at 4K resolution?

Bloomberg Japan, which previously reported on Nintendo’s upcoming manufacturing plans, now has an answer: a new chipset, courtesy of Nvidia, that will leverage the GPU maker’s proprietary upscaling system, according to “people familiar with the matter.” This system, dubbed Deep Learning Super Sampling (DLSS), has so far only been available on Nvidia’s RTX line of graphics cards, and it relies on “tensor” GPU processing cores. Their machine learning computations, trained on thousands of hours of existing game footage, interpret a game’s lower-resolution signal, then upscale the image to resolutions as high as 4K (or, in the case of the $1,499 RTX 3090, as high as 8K).

If you’re unfamiliar with DLSS, check out my recent review of the RTX 3060, where I reviewed the progress Nvidia has made with DLSS since its retail debut in late 2018. It has progressed enough to take native resolutions as low as 1080p and boost them closer to 4K, often with fewer visual artifacts than image-smoothing methods like temporal anti-aliasing (TAA).

#deep-learning-super-sampling, #dlss, #gaming-culture, #nintendo-switch, #nintendo-switch-pro, #nvidia, #switch-pro

Nvidia raises GeForce Now subscription plan to $10 per month

Nvidia’s cloud gaming service GeForce Now has announced some changes to its subscription plans. Starting today, paid memberships cost $9.99 per month, or $99.99 per year — they are now called ‘Priority’ memberships.

If you’re an existing ‘Founders’ member, you’ll keep the same subscription price as long as you remain a subscriber. If you stop your subscription at any point, you won’t be able to pay $5 per month again.

Last year, when Nvidia originally introduced paid plans for GeForce Now, the company was pretty transparent with its user base. You could pay $4.99 per month to access the Founders edition, but the company was going to raise the subscription fee at some point. And it sounds like Nvidia has made up its mind and thinks the paid subscription is worth $9.99 per month.

If you’re not familiar with GeForce Now, it lets you start a game on a powerful gaming PC in a data center near you. You get a video stream on your computer, mobile phone, tablet or TV of the game running in that data center — GeForce Now uses a web app on iOS and iPadOS and is available on a limited number of Android TV devices. When you press a button on your controller, the action is relayed to the server so that you can interact with the game. All of this happens in tens of milliseconds, making it one of the smoothest cloud gaming experiences available right now.
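
Those “tens of milliseconds” are a round-trip budget spread across several stages. A rough, illustrative breakdown follows; every per-stage number below is an assumption made for the sketch, not a figure Nvidia has published.

```python
# Rough, illustrative round-trip budget for one cloud-gaming input;
# every per-stage number is an assumption, not an Nvidia figure.
stages_ms = {
    "controller input to client app": 2,
    "client to data center (network)": 10,
    "server-side render + encode": 12,
    "data center to client (network)": 10,
    "client decode + display": 8,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:35s} {ms:3d} ms")
print(f"{'total round trip':35s} {total:3d} ms")  # ~42 ms, i.e. 'tens of milliseconds'
```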

Compared to Google Stadia and Amazon Luna, Nvidia isn’t starting its own game store. GeForce Now customers launch games that they already own. The platform supports Steam, Epic Games, GOG.com and Ubisoft’s launcher.

Game publishers have to opt in to GeForce Now, which means that you can’t launch all your games that you own in your Steam library. Right now, GeForce Now supports around 800 games that you can find on this page.

If you want to try GeForce Now, you can start playing for free. Nvidia offers a free membership that should be considered a free trial. First, you have to wait in a queue until a free server is available — it can take five, ten or fifteen minutes.

After that, you’re limited to one-hour sessions. When you’ve played for an hour, you’re kicked out of the server. You can still start the game again, but you’ll have to go through the queue one more time.

If you become a paid member, games start nearly instantly and you can play up to six hours at a time. Similarly, you can start the game instantly after your six hours are up. Paid members also get RTX-enabled graphics.

When it comes to specifications, Nvidia has several configurations with different CPUs, graphics cards and RAM. If you play Fortnite, you might not get the best rig, as you can get very high graphics settings on a medium-range PC. But if you launch Cyberpunk 2077, the service tries to prioritize better rigs.

Nvidia says it has attracted nearly 10 million users for its cloud gaming service. It’s unclear how many of them are paying for a subscription.

The company doubled the number of data centers in the last year. There are now more than 20 data centers operated by Nvidia or local partners. The company plans to expand capacity in existing data centers and add new data centers in Phoenix, Montreal and Australia.

There will be some quality-of-life updates as well, such as the ability to link games with your account to make it easier to launch them and more aggressive preloading of games.

#gaming, #geforce-now, #nvidia, #nvidia-geforce-now, #tc

AMD Radeon RX 6700 XT review: If another sold-out GPU falls in the forest…

Look, I’ll level with you: reviewing a GPU amidst a global chip shortage is ludicrous enough to count as dark comedy. Your ability to buy new, higher-end GPUs from either Nvidia or AMD has been hamstrung for months—a fact borne out by their very low ranks on Steam’s gaming PC stats gathered around the world.

As of press time, AMD’s latest “Big Navi” GPUs barely make a ripple in Steam’s list. That’s arguably a matter of timing, with their November 2020 launch coming two months after Nvidia began shipping its own 3000-series GPUs. But how much is that compounded by low supplies and shopping bots? AMD isn’t saying, and on the eve of the Radeon RX 6700 XT’s launch, the first in its “Little Navi” line, the company’s assurances aren’t entirely comforting.

In an online press conference ahead of the launch, AMD Product Manager Greg Boots offered the usual platitudes: “a ton of demand out there,” “we’re doing everything we can,” that sort of thing. He mentioned a couple of AMD’s steps that may help this time around. For one, AMD’s “reference” GPU model is launching simultaneously with partner cards, so if the inventory is actually out there (we certainly don’t know), that at least puts higher numbers of GPUs in the day-one pool. Also, Boots emphasized stock being made available specifically for brick-and-mortar retailers—though he didn’t offer a ratio of how many GPUs are going to those shops, compared to online retailers.

#amd, #features, #gpus, #nvidia, #radeon, #radeon-rx-6700xt, #tech

Nvidia accidentally releases driver to un-nerf cryptocurrency mining

When the value of cryptocurrencies soared back in 2017, it created a huge shortage of graphics cards, as the parallel processing capabilities of a graphics card make it ideal for mining cryptocurrencies like Ethereum (but not bitcoin). That created a financial windfall for the leading graphics card makers, but it also angered gamers, the companies’ traditional customers.

In recent months, cryptocurrencies have once again been soaring to record highs, which has driven another spike in graphics card prices. So when Nvidia rolled out its RTX 3060 graphics card last month, the company deliberately limited the card’s capacity for mining cryptocurrency. Our quick-and-dirty test suggested that Nvidia reduced the card’s mining capacity by roughly half. The hope was that miners would leave the card alone, ensuring that some cards would continue to be available for the gaming market.

Unfortunately, the mining limitation appears to have been implemented in software. And Nvidia accidentally released a new driver that unlocked the 3060’s full mining capacity. Nvidia acknowledged the mistake in a statement to The Verge.

#blockchain, #nvidia, #policy

Nvidia RTX 3060 review: A fine $329 GPU, but ho-hum among the 3000 series

The EVGA RTX 3060, as posed in front of some sort of high-tech honeycomb array. (credit: EVGA / Nvidia)

The past year of graphics card reviews has been an exercise in dramatic asterisks, and for good reason. Nvidia and AMD have seen fit to ensure members of the press have access to new graphics cards ahead of their retail launches, which has placed us in a comfy position to praise each of their latest-gen offerings: good prices, tons of power.

Then we see our comment sections explode with unsatisfied customers wondering how the heck to actually buy them. I’ve since softened my tune on these pre-launch previews.

I say all of this up front about the Nvidia RTX 3060, going on sale today, February 25 (at 12pm ET, if you’re interested in entering the day-one sales fray) because it’s the first Nvidia GPU I’ve tested in a while to make my cautious stance easier. The company has been on a tear with its RTX 3000-series of cards in terms of sheer consumer value, particularly compared to equivalent prior-gen cards (the $1,499 RTX 3090 notwithstanding), but the $329 RTX 3060 (not to be confused with December’s 3060 Ti) doesn’t quite pull the same weight. It’s a good 1080p card with 1440p room to flex, but it’s not the next-gen jump in its Nvidia price category we’ve grown accustomed to.

#gaming-culture, #graphics-card, #graphics-cards, #nvidia, #nvidia-rtx, #rtx-3060, #tech

Nvidia wants to buy CPU designer Arm—Qualcomm is not happy about it

Some current Arm licensees view the proposed acquisition as highly toxic. (credit: Aurich Lawson / Nvidia)

In September 2020, Nvidia announced its intention to buy Arm, the license holder for the CPU technology that powers the vast majority of mobile and high-powered embedded systems around the world.

Nvidia’s proposed deal would acquire Arm from Japanese conglomerate SoftBank for $40 billion—a number which is difficult to put into perspective. Forty billion dollars would represent one of the largest tech acquisitions of all time, but 40 Instagrams or so doesn’t seem like that much to pay for control of the architecture supporting every well-known smartphone in the world, plus a staggering array of embedded controllers, network routers, automobiles, and other devices.

Today’s Arm doesn’t sell hardware

Arm’s business model is fairly unusual in the hardware space, particularly from a consumer or small business perspective. Arm’s customers—including hardware giants such as Apple, Qualcomm, and Samsung—aren’t buying CPUs the way you’d buy an Intel Xeon or AMD Ryzen. Instead, they’re purchasing the license to design and/or manufacture CPUs based on Arm’s intellectual property. This typically means selecting one or more reference core designs, putting several of them in one system on chip (SoC), and tying them all together with the necessary cache and other peripherals.
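
To make that licensing model concrete, here is a minimal sketch in Python, with hypothetical core names and figures of our own choosing, of how a licensee might describe an SoC assembled from licensed Arm reference designs rather than from finished chips:

    from dataclasses import dataclass

    @dataclass
    class LicensedCore:
        design: str   # Arm reference design the licensee pays to use
        count: int    # how many copies of that core go into the SoC
        role: str     # "performance" or "efficiency"

    # Hypothetical SoC: several licensed reference cores tied together
    # with shared cache and the blocks the licensee adds itself.
    soc = {
        "cores": [
            LicensedCore("big performance core", 4, "performance"),
            LicensedCore("little efficiency core", 4, "efficiency"),
        ],
        "shared_l3_cache_mb": 8,
        "licensee_blocks": ["memory controller", "GPU", "modem", "image signal processor"],
    }

    total = sum(c.count for c in soc["cores"])
    print(f"Hypothetical licensee SoC with {total} Arm-designed cores")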

#acquisition, #antitrust, #arm, #cpu, #gpu, #merger, #mobile-cpu, #nvidia, #processors, #qualcomm, #regulation, #tech

0

NeuReality raises $8M for its novel AI inferencing platform

NeuReality, an Israeli AI hardware startup that is working on a novel approach to improving AI inferencing platforms by doing away with the current CPU-centric model, is coming out of stealth today and announcing an $8 million seed round. The group of investors includes Cardumen Capital, crowdfunding platform OurCrowd and Varana Capital. The company also today announced that Naveen Rao, the GM of Intel’s AI Products Group and former CEO of Nervana Systems (which Intel acquired), is joining the company’s board of directors.

The founding team of CEO Moshe Tanach, VP of operations Tzvika Shmueli and VP of very large-scale integration Yossi Kasus has a background in AI as well as networking: Tanach spent time at Marvell and Intel, for example, Shmueli at Mellanox and Habana Labs, and Kasus at Mellanox, too.

It’s the team’s networking and storage knowledge and seeing how that industry built its hardware that now informs how NeuReality is thinking about building its own AI platform. In an interview ahead of today’s announcement, Tanach wasn’t quite ready to delve into the details of NeuReality’s architecture, but the general idea here is to build a platform that will allow hyperscale clouds and other data center owners to offload their ML models to a far more performant architecture where the CPU doesn’t become a bottleneck.

“We kind of combined a lot of techniques that we brought from the storage and networking world,” Tanach explained. “Think about traffic manager and what it does for Ethernet packets. And we applied it to AI. So we created a bottom-up approach that is built around the engine that you need. Where today, they’re using neural net processors — we have the next evolution of AI computer engines.”

As Tanach noted, the result of this should be a system that — in real-world use cases that include not just synthetic benchmarks of the accelerator but also the rest of the overall architecture — offers 15 times the performance per dollar for basic deep learning offloading and far more once you offload the entire pipeline to its platform.

NeuReality is still in its early days, and while the team has working prototypes now, based on a Xilinx FPGA, it expects to be able to offer its fully custom hardware solution early next year. As its customers, NeuReality is targeting the large cloud providers, but also data center and software solutions providers like WWT to help them provide specific vertical solutions for problems like fraud detection, as well as OEMs and ODMs.

Tanach tells me that the team’s work with Xilinx created the groundwork for its custom chip — though building that chip, likely on an advanced node, will cost money, so he’s already thinking about raising the next round of funding for that.

“We are already consuming huge amounts of AI in our day-to-day life and it will continue to grow exponentially over the next five years,” said Tanach. “In order to make AI accessible to every organization, we must build affordable infrastructure that will allow innovators to deploy AI-based applications that cure diseases, improve public safety and enhance education. NeuReality’s technology will support that growth while making the world smarter, cleaner and safer for everyone. The cost of the AI infrastructure and AIaaS will no longer be limiting factors.”

NeuReality team. (Image Credits: NeuReality)

#artificial-intelligence, #cardumen-capital, #computing, #ethernet, #fpga, #funding, #fundings-exits, #habana-labs, #hardware-startup, #intel, #mellanox, #ml, #neureality, #nvidia, #ourcrowd, #recent-funding, #science-and-technology, #startups, #tc, #technology, #varana-capital, #xilinx

0

Nvidia’s next laptop GPU generation powers a leap to 1440p displays

If you’ve been wondering when gaming laptops would begin a more serious push to 1440p panels, this week’s CES reveals from Nvidia are aimed directly at you. Behold: a generational jump in the company’s laptop-minded GPUs, this time with Ampere architecture and RTX 3000-series branding.

Three GPU models have been announced in all, and they’re named after the GeForce RTX 3080, 3070, and 3060. They are slated to roll out in “70+” laptop models starting January 26. Nvidia has listed “top OEMs” like Acer, Alienware, ASUS, Gigabyte, HP, Lenovo, MSI, and Razer with upcoming RTX 3000-series laptops, along with “local OEMs and system builders.”

Naming convention double-check

Nvidia’s sales pitch positions the RTX 3060 laptop variant as “faster than laptops featuring the RTX 2080 Super,” though this model may land more specifically in 1080p systems. The two higher-end models are frequently referred to as part of 1440p systems, a resolution that has long been left in the gaming-laptop cold (and will arguably benefit hugely from Nvidia’s proprietary DLSS upscaling solution). While Nvidia’s latest promotional materials mention a bang-for-the-buck upgrade compared to the last generation of laptop GPUs, we’re still waiting to see OEMs roll out specific prices and specs for their late-January models. (Also, we’re wondering if those laptops will sell out too quickly for average humans to get them.)

#gaming-culture, #gaming-laptops, #nvidia, #nvidia-rtx

0

Will startup valuations change given rising antitrust concerns?

The United States has, over the past few decades, been extremely lenient on antitrust enforcement, rarely blocking deals, even with overseas competitors. Yet there have been inklings that things are changing. Yesterday, we learned that Visa and Plaid called off their combination after the Department of Justice sued to block it in early November. We also learned a week ago that shaving startup Billie would end its proposed acquisition by consumer goods giant P&G after the Federal Trade Commission sued to block it in December.

Many, many, many other deals of course get through the gauntlet of regulations, but even a few smoke signals are enough to start raising concerns. That new calculus comes even before we start to look at the morass of reforms being proposed around antitrust in Washington, DC these days, nearly all of which — on a bipartisan basis — would create stricter controls for antitrust, particularly in critical technology industries and information services.

So, what’s the valuation prognosis for startups these days given that one of the most important exit options available is increasingly looking fraught?

#antitrust, #arm-holdings, #billie, #nvidia, #plaid, #qualcomm, #startup-valuations, #tc, #visa

0

UK’s markets regulator asks for views on Nvidia-Arm

The UK’s competition and markets regulator is seeking views on Nvidia’s takeover of Arm Holdings as it prepares to kick off formal oversight of potential competition impacts of the deal.

The US-based chipmaker’s $40BN purchase of the UK-based chip designer, announced last September, has triggered a range of domestic concerns — over the impact on UK jobs, industrial strategy/economic sovereignty and even national security — although the Competition and Markets Authority (CMA)’s probe will focus solely on possible competition-related impacts.

It said today that the probe is likely to consider whether, post-acquisition, Arm would have an incentive to “withdraw, raise prices or reduce the quality of its IP licensing services to Nvidia’s rivals”, per a press release.

The CMA is inviting interested third parties to comment on the acquisition before January 27 — ahead of the launch of its formal probe. That phase 1 investigation will include additional opportunities for external comment, according to the regulator, which has not yet provided a date for when it will take a decision on the acquisition.

Further details can be found on the CMA’s case page.

Commenting in a statement, Andrea Coscelli, the CMA’s chief executive, said: “The chip technology industry is worth billions and critical to many of the products that we use most in our everyday lives. We will work closely with other competition authorities around the world to carefully consider the impact of the deal and ensure that it doesn’t ultimately result in consumers facing more expensive or lower quality products.”

Among those sounding the alarm about the impact on the UK of an Nvidia-Arm takeover is the original founder of the company, Hermann Hauser.

In September he wrote to the prime minister saying he’s “extremely concerned” about the impact on UK jobs, Arm’s business model and the future of the country’s economic sovereignty.

A website Hauser set up to gather signatures of objection — called savearm.co.uk — states that more than 2,000 signatures had been collected as of October 12.

As well as the CMA, a number of other international regulators will be scrutinizing the deal, with Nvidia saying in September that it expected the clearance process to take 1.5 years.

It has sought to preempt UK concerns, saying it will double down on the country as a core part of its engineering efforts by expanding Arm’s offices in Cambridge — where it said it would establish “a new global center of excellence in AI research”.

On wider national security concerns that are being attached to the Nvidia-Arm deal from some quarters, the CMA noted that the UK government could choose to issue a public interest intervention notice “if appropriate”.

Arm was earlier bought by Japan’s SoftBank for around $31BN back in 2016.

Its subsequent deal to offload the chip designer to Nvidia is a mixture of cash and stock — and included an immediate $2BN cash payment to SoftBank. But the majority of the transaction’s value is due to be paid in Nvidia stock at close of the deal, pending regulatory clearances.

#arm-holdings, #chip-market, #cma, #competition, #consolidation, #europe, #hardware, #nvidia

0

Grand theft GPU: $340,000 worth of RTX 3090s “fell off a truck” in China

The GPU Grinch doesn’t care about your lists or whether you’ve been naughty or nice. (credit: Aurich Lawson / Dr. Seuss / Getty Images)

Sometime last week, thieves stole a large number of Nvidia-based RTX 3090 graphics cards from MSI’s factory in mainland China. The news comes from Twitter user @GoFlying8, who posted what appears to be an official MSI internal document about the theft this morning, along with commentary from a Chinese-language website.

Roughly translated—in other words, OCR scanned, run through Google Translate, and with the nastiest edges sawn off by yours truly—the MSI document reads something like this:

Ensmai Electronics (Deep) Co., Ltd.
Announcement
Memo No. 1-20-12-4-000074
Subject: Regarding the report theft of the graphics card, it is appropriate to reward.

Explanation:

  1. Recently, high unit price display cards produced by the company have been stolen by criminals. The case has now been reported to the police. At the same time, I also hope that all employees of the company will actively and truthfully report this case.
  2. Anyone providing information which solves this case will receive a reward of 100,000 yuan. The company promises to keep the identity of the whistleblower strictly confidential.
  3. If any person is involved in the case, from the date of the public announcement, report to the company’s audit department or the head of the conflicting department. If the report is truthful and assists in the recovery of the missing items, the company will report to the police but request leniency. The law should be dealt with seriously.
  4. With this announcement, I urge my colleagues to be professional and ethical, and to be disciplined, learn from cases, and be warned.
  5. Reporting Tel: [elided]

Reporting mailbox of the Audit Office: [elided]
December 4, 2020

There has been some confusion surrounding the theft in English-speaking tech media; the MSI document itself dates to last Friday and does not detail how many cards were stolen or what the total value was. The surrounding commentary—from what seems to be a Chinese news app—claims that the theft was about 40 containers of RTX 3090 cards, at a total value of about 2.2 million renminbi ($336K in US dollars).

#gpu, #nvidia, #rtx-3090, #tech, #theft

0

Apple reportedly testing Intel-beating high core count Apple Silicon chips for high-end Macs

Apple is reportedly developing a number of Apple Silicon chip variants with significantly higher core counts relative to the M1 chips that it uses in today’s MacBook Air, MacBook Pro and Mac mini computers based on its own ARM processor designs. According to Bloomberg, the new chips include designs that have 16 power cores and four high-efficiency cores, intended for future iMacs and more powerful MacBook Pro models, as well as a 32-performance-core top-end version that would eventually power the first Apple Silicon Mac Pro.

The current M1 chip has four performance cores, along with four high-efficiency cores. It also uses either seven or eight dedicated graphics cores, depending on the Mac model. Apple’s next-gen chips could leap right to 16 performance cores, or Bloomberg says Apple could opt to use eight- or 12-core versions of the same, depending primarily on what kinds of yields it sees from manufacturing processes. Chipmaking, particularly in the early stages of new designs, often has error rates that render a number of the cores on each new chip unusable, so manufacturers often just ‘bin’ those chips, offering them to the market as lower max core count designs until manufacturing success rates improve.
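
As a rough illustration of that binning logic (a toy simulation with an assumed defect rate, not anyone’s real yield data), random per-core defects turn a batch of 16-core dies into a mix of 16-, 12- and eight-core parts:

    import random
    from typing import Optional

    random.seed(0)

    CORES_PER_DIE = 16
    DEFECT_RATE = 0.03          # assumed chance that any single core comes out unusable
    BINS = [16, 12, 8]          # sell each die as the largest bin its good cores allow

    def bin_die() -> Optional[int]:
        good_cores = sum(random.random() > DEFECT_RATE for _ in range(CORES_PER_DIE))
        for bin_size in BINS:
            if good_cores >= bin_size:
                return bin_size
        return None             # too many bad cores; the die is scrapped

    tally = {}
    for _ in range(10_000):
        result = bin_die()
        tally[result] = tally.get(result, 0) + 1

    print(tally)    # most dies keep all 16 cores; the rest are mostly binned as 12-core parts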

Apple’s M1 system on a chip.

Regardless of whether next-gen Apple Silicon Macs use 16-, 12- or eight-performance-core designs, they should provide ample competition for their Intel equivalents. Apple’s debut M1 line has won the praise of critics and reviewers for significant performance benefits over not only their predecessors, but also much more expensive and powerful Macs powered by higher-end Intel chips.

The report also says that Apple is developing new graphics processors that include both 16- and 32-core designs for future iMacs and pro notebooks, and that it even has 64- and 128-core designs in development for use in high-end pro machines like the Mac Pro. These should offer performance that can rival even dedicated GPU designs from Nvidia and AMD for some applications, though they aren’t likely to appear in any shipping machines before either late 2021 or 2022 according to the report.

Apple has said from the start that it plans to transition its entire line to its own Apple Silicon processors by 2022. The M1 Macs now available are the first generation, and Apple has begun with its lowest-power dedicated Macs, with a chip design that hews closely to the design of the top-end A-series chips that power its iPhone and iPad line. Next-generation M-series chips look like they’ll be further differentiated from Apple’s mobile processors, with significant performance advantages to handle the needs of demanding professional workloads.

#amd, #apple, #apple-inc, #apple-silicon, #computers, #computing, #gadgets, #hardware, #imac, #intel, #ipad, #iphone, #m1, #macbook, #macintosh, #nvidia, #steve-jobs, #tc

0

“Demand will probably exceed supply”: Nvidia explains RTX 30 shortages

Speaking at the Credit Suisse digital financial services conference, Nvidia Chief Financial Officer Colette Kress addressed and partially explained the recent shortages of the company’s new RTX 30-series graphics cards like the GeForce RTX 3080 and 3070.

She confirmed that wafer shortages at chip supplier Samsung Foundry are a factor, as many have speculated, but also suggested there is more to it than that. Said Kress:

We do have supply constraints, and our supply constraints do expand past what we’re seeing in terms of wafers and silicon, but yes, some constraints are in substrates and components. We continue to work during the quarter on our supply, and we believe, though, that demand will probably exceed supply in Q4 for overall gaming.

She also said that it might be “a couple months” before Nvidia can catch up to demand, but qualified even that by saying “at this time, it’s really difficult for us to quantify.”

#nvida-rtx, #nvidia, #rtx, #rtx-3070, #rtx-3080, #tech

0

Google Stadia and GeForce Now are both coming to iOS as web apps

Google and Nvidia both had some news about their respective cloud gaming services today. Let’s start with Nvidia. GeForce Now is now available on the iPhone and the iPad as a web app. The company says it’s a beta for now, but you can start using it by heading over to play.geforcenow.com on your iOS device.

GeForce Now is a cloud gaming service that works with your own game library. You can connect to your Steam, Epic and Ubisoft Connect accounts and play games you’ve already purchased on those third-party platforms — GOG support is coming soon. GeForce Now is also available on macOS, Android and Windows.

Game publishers have to opt in to appear on GeForce Now, which means that you won’t find your entire Steam library on the service. Still, the list is already quite long.

Right now, it costs $5 per month to access the Founders edition, which lets you play whenever you want and for as long as you want. It’s an introductory price, which means that Nvidia could raise prices in the future.

You can also try the service with a free account. You’re limited to one-hour sessions and less powerful hardware. There are also fewer slots available. For instance, you have to wait 11 minutes to launch a game with a free account right now.

Once you add the web app to your iOS home screen, you can launch the service in full screen without the interface of Safari. You can connect a Bluetooth controller. Unfortunately, you can’t use a keyboard and a mouse.

The company says it is actively working with Epic Games on a touch-friendly version of Fortnite so that iOS players can play the game again. It could definitely boost usage on the service.

As for Google, the company issued an update 12 months after the launch of Stadia. Unlike GeForce Now, Stadia works more like a console. You have to buy games for the platform specifically. There are a hundred games on the platform, including some that you get with an optional Stadia Pro subscription.

The company says that iOS testing should start in the coming weeks. “This will be the first phase of our iOS progressive web application. As we test performance and add more features, your feedback will help us improve the Stadia experience for everyone. You can expect this feature to begin rolling out several weeks from now,” the company wrote.

#gaming, #geforce-now, #google, #google-stadia, #nvidia, #stadia

0

Nvidia developed a radically different way to compress video calls

Instead of transmitting an image for every frame, Maxine sends keypoint data that allows the receiving computer to re-create the face using a neural network. (credit: Nvidia)

Last month, Nvidia announced a new platform called Maxine that uses AI to enhance the performance and functionality of video conferencing software. The software uses a neural network to create a compact representation of a person’s face. This compact representation can then be sent across the network, where a second neural network reconstructs the original image—possibly with helpful modifications.

Nvidia says that its technique can reduce the bandwidth needs of video conferencing software by a factor of 10 compared to conventional compression techniques. It can also change how a person’s face is displayed. For example, if someone appears to be facing off-center due to the position of her camera, the software can rotate her face to look straight instead. Software can also replace someone’s real face with an animated avatar.
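
Conceptually, and only as a hedged sketch with made-up sizes and stand-in functions rather than Nvidia’s actual Maxine API, the saving comes from shipping a handful of facial keypoints per frame instead of a compressed image, and letting a generative network on the receiving end re-render the face from a reference frame plus those keypoints:

    # Toy numbers, chosen only to show where a roughly 10x saving could come from.
    ENCODED_FRAME_BYTES = 15_000    # assumed size of a conventionally compressed video frame
    NUM_KEYPOINTS = 128             # assumed number of facial keypoints sent per frame
    BYTES_PER_KEYPOINT = 8          # e.g. two 32-bit floats per keypoint

    def sender(frame, extract_keypoints):
        # Instead of an image, transmit only the keypoint coordinates for this frame.
        return extract_keypoints(frame)

    def receiver(reference_frame, keypoints, generator_network):
        # A generative network re-creates the face from a previously sent reference
        # frame, warped to match the transmitted keypoints (and optionally re-posed).
        return generator_network(reference_frame, keypoints)

    keypoint_bytes = NUM_KEYPOINTS * BYTES_PER_KEYPOINT
    print(f"~{ENCODED_FRAME_BYTES / keypoint_bytes:.0f}x less data per frame in this toy example")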

Maxine is a software development kit, not a consumer product. Nvidia is hoping third-party software developers will use Maxine to improve their own video conferencing software. And the software comes with an important limitation: the device receiving a video stream needs an Nvidia GPU with tensor core technology. To support devices without an appropriate graphics card, Nvidia recommends that video frames be generated in the cloud—an approach that may or may not work well in practice.

#generative-adversarial-networks, #machine-learning, #maxine, #neural-networks, #nvidia, #nvidia-maxine, #science, #tech

0

Amazon begins shifting Alexa’s cloud AI to its own silicon

Amazon engineers discuss the migration of 80 percent of Alexa’s workload to Inferentia ASICs in this three-minute clip.

On Thursday, an AWS blog post announced that the company has moved most of the cloud processing for its Alexa personal assistant off of Nvidia GPUs and onto its own Inferentia application-specific integrated circuit (ASIC). Amazon dev Sebastien Stormacq describes Inferentia’s hardware design as follows:

AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.

When an Amazon customer—usually someone who owns an Echo or Echo Dot—makes use of the Alexa personal assistant, very little of the processing is done on the device itself. The workload for a typical Alexa request looks something like this:

  1. A human speaks to an Amazon Echo, saying: “Alexa, what’s the special ingredient in Earl Grey tea?”
  2. The Echo detects the wake word—Alexa—using its own on-board processing
  3. The Echo streams the request to Amazon data centers
  4. Within the Amazon data center, the voice stream is converted to phonemes (Inference AI workload)
  5. Still in the data center, phonemes are converted to words (Inference AI workload)
  6. Words are assembled into phrases (Inference AI workload)
  7. Phrases are distilled into intent (Inference AI workload)
  8. Intent is routed to an appropriate fulfillment service, which returns a response as a JSON document
  9. JSON document is parsed, including text for Alexa’s reply
  10. Text form of Alexa’s reply is converted into natural-sounding speech (Inference AI workload)
  11. Natural speech audio is streamed back to the Echo device for playback—”It’s bergamot orange oil.”

As you can see, almost all of the actual work done in fulfilling an Alexa request happens in the cloud—not in an Echo or Echo Dot device itself. And the vast majority of that cloud work is performed not by traditional if-then logic but inference—which is the answer-providing side of neural network processing.
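
Stitched together in code, that flow looks roughly like the sketch below; every function name here is a hypothetical stand-in rather than a real AWS service, and the trivial stub bodies exist only so the sketch runs end to end. The stages marked as inference are the ones Amazon says it has shifted from Nvidia GPUs to Inferentia.

    # Stand-in implementations so the sketch is runnable; each "inference"
    # step is a neural-network workload in the real pipeline.
    def speech_to_phonemes(audio):    return ["ER1", "L", "G", "R", "EY1"]              # step 4, inference
    def phonemes_to_words(phonemes):  return ["earl", "grey", "special", "ingredient"]  # step 5, inference
    def assemble_phrases(words):      return " ".join(words)                            # step 6, inference
    def distill_intent(phrase):       return {"intent": "LookUpIngredient", "item": "earl grey tea"}  # step 7, inference
    def route_to_fulfillment(intent): return {"reply": "It's bergamot orange oil."}     # step 8, plain service call
    def parse_reply(document):        return document["reply"]                          # step 9
    def text_to_speech(text):         return f"<synthesized audio: {text}>"             # step 10, inference

    def handle_alexa_request(audio_stream):
        phonemes = speech_to_phonemes(audio_stream)
        words    = phonemes_to_words(phonemes)
        phrase   = assemble_phrases(words)
        intent   = distill_intent(phrase)
        response = route_to_fulfillment(intent)
        reply    = parse_reply(response)
        return text_to_speech(reply)          # streamed back to the Echo in step 11

    print(handle_alexa_request(b"...raw audio from the Echo..."))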

#ai, #amazon, #aws, #gpu, #inference, #inferentia, #machine-learning, #nvidia, #tech, #uncategorized

0

Nvidia reportedly bringing Fortnite back to iOS through its cloud gaming service

Nvidia is bringing Fortnite back to iPhones and iPads, according to a report from the BBC.

The British news service is reporting that Nvidia has developed a version of its GeForce Now cloud gaming service that runs on Safari.

The development means that Fortnite gamers can play the Epic Games title off of servers run by Nvidia. What’s not clear is whether the cloud gaming service will mean significant lag times for players that could affect their gameplay.

Apple customers have been unable to download new versions of Epic Games’ marquee title after the North Carolina-based company circumvented Apple’s rules around in-game payments.

Revenues and rules are at the center of the conflict between Epic and Apple. Epic had developed an in-game marketplace where transactions were not subject to the 30% charges that Apple places on transactions conducted through its platform.

The maneuver was a clear violation of Apple’s terms of service, but Epic is arguing that the rules themselves are unfair and an example of Apple’s monopolistic hold over distribution of applications on its platform.

The ongoing legal dispute won’t even see the inside of a courtroom until May and it could be years before the lawsuit is resolved.

That’s going to create a lot of hassles for the nearly 116 million iOS Fortnite players, especially for the 73 million players that only use Apple products to access the game, according to the BBC report.

Unlike Android, Apple does not allow games or other apps to be loaded on to its phones or tablets via app stores other than its own.

Nvidia already offers its GeForce Now gaming service for Mac, Windows, Android and Chromebook computers, but the new version will be available on Apple mobile devices as well, according to the BBC report.

If it moves ahead, Nvidia’s cloud gaming service would be the only one on the market to bring Fortnite to iOS users. Neither Amazon’s Luna cloud-gaming platform nor Google’s Stadia service carries Fortnite.

#android, #apple, #cloud-gaming, #epic-games, #fortnite, #geforce, #nvidia, #stadia, #tc, #video-games

0

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of NVIDIA’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

For now, there is only one size available, the p4d.24xlarge instance in AWS slang, and the eight A100 GPUs are connected over NVIDIA’s NVLink communication interface and offer support for the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20/hour for one-year reserved instances and $11.57 for three-year reserved ones).
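
As a quick back-of-the-envelope check on those discounts (using only the prices quoted above, and treating the one-year figure as roughly $20 since only “less than $20/hour” is given):

    on_demand    = 32.77    # $/hour, quoted on-demand price
    reserved_1yr = 20.00    # assumed approximation; the quote is only "less than $20/hour"
    reserved_3yr = 11.57    # $/hour, quoted three-year reserved price

    for label, rate in [("1-year reserved", reserved_1yr), ("3-year reserved", reserved_3yr)]:
        saving = 1 - rate / on_demand
        print(f"{label}: ~{saving:.0%} below on-demand")   # roughly 39% and 65% in this rough estimate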

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads on what is essentially a supercomputer-scale machine. Given the price, you’re not likely to spin up one of these clusters to train a model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

#amazon-web-services, #artificial-intelligence, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #ge-healthcare, #gpgpu, #gpu, #intel, #machine-learning, #nvidia, #toyota-research-institute

0

Nvidia RTX 3070 review: AMD’s stopwatch just started ticking a lot louder

Talking about the RTX 3070, Nvidia’s latest $499 GPU launching Thursday, October 29, is tricky in terms of the timing of today’s review embargo. As of right now, the RTX 3070 is the finest GPU in this price sector by a large margin. In 24 hours, that could change—perhaps drastically.

Ahead of AMD’s big October 28 event, dedicated to its RDNA 2 GPU line, Nvidia gave us an RTX 3070 Founders Edition to test however we saw fit. This is the GPU Nvidia absolutely needed to reveal before AMD shows up in (expectedly) the same price and power range.

Inside of an Nvidia-only bubble, this new GPU is a sensation. Pretty much every major RTX 2000-series card overshot with proprietary promises instead of offering brute force worth its inflated costs. Yet without AMD nipping at its heels, Nvidia’s annoying strategy seemed to be the right call: the company established the RTX series’ exclusive bonus processing cores as a major industry option without opposition, then got to wait a full year before competing with significant power jumps and delectable price cuts.

#features, #gaming-culture, #graphics-card, #graphics-cards, #nvidia, #nvidia-rtx

0