How to meet the demand of EV infrastructure and maintain a stable grid

As electric vehicles (EVs) become the new standard, charging infrastructure will become a commonplace detail blending into the landscape, available in a host of places from a range of providers: privately run charging stations, the office parking lot, home garages and government-provided locations to fill in the gaps. We need a new energy blueprint for the United States in order to maintain a stable grid to support this national move to EV charging.

The Biden administration has announced plans for 500,000 charging stations to be installed nationally, along with additional energy storage to facilitate the shift to EVs. Integrating all of this new infrastructure requires balancing traffic on the grid and managing increased energy demand, a challenge that stretches beyond power lines and storage itself.

The majority of EV infrastructure pulls its power from the grid, which will add significant demand when it reaches scale. In an ideal situation, EV charging stations will have their own renewable power generation co-located with storage, but new programs and solutions are needed to make that model available everywhere. A range of scenarios for how renewables can be used to power EV charging have been piloted in the U.S. in recent years. Eventually, EVs will likely even provide power to the grid.

These technological advances will happen as we progress through the energy transition; regardless, EV infrastructure will heavily rely on the U.S. grid. That makes coordination across a range of stakeholders and behavior change among the general public essential for keeping the grid stable while meeting energy demand.

The White House’s fact sheet for EV charging infrastructure points to a technical blueprint that the Department of Energy and the Electric Power Research Institute will be working on together. It is critical that utilities, energy management and storage stakeholders, and the general public be included in planning — here’s why.

Stakeholder collaboration

Charging infrastructure is currently fragmented in the U.S. Much of it is privatized and there are complaints that unless you drive a Tesla, it is hard to find charging while on the road. Some EV owners have even returned to driving gas-powered vehicles. There’s reason to be hopeful that this will rapidly change.

ChargePoint and EVgo are two companies that will likely become household names as their EV networks expand. A coalition made up of some of the largest U.S. utilities — including American Electric Power, Dominion Energy, Duke Energy, Entergy, Southern Company and the Tennessee Valley Authority — called the Electric Highway Coalition has announced plans for a regional network of charging stations spanning their service territories.

Networks that swap out private gas stations for EV charging are one piece of the puzzle. We also need to ensure that everyone has affordable access and that charging times are staggered — this is one of the core concerns on every stakeholder’s mind. Having charging available in a range of places spreads out demand, helping keep power available and the grid balanced.

Consumer needs vary by location and housing, work schedule and economic situation, and meeting them will require new solutions that make EVs and charging accessible to everyone. What works in the suburbs won’t suit rural or urban areas, and just imagine the needs of someone who works the night shift in a dense urban area.

Biden’s plan includes “$4 million to encourage strong partnerships and new programs to increase workplace charging regionally or nationally, which will help increase the feasibility of [plug-in electric vehicle] ownership for consumers in underserved communities.” Partnerships and creative solutions will be equally necessary.

An opportunity to fully engage technologies we already have

“Fifty percent of the reductions we have to make to get to net-zero by 2050 or 2045 are going to come from technologies that we don’t yet have,” John Kerry said recently, causing a stir. He later clarified that we also have technologies now that we need to put to work, which received less air time. In reality, we are just getting started in utilizing existing renewable and energy transition technologies; we have yet to realize their full potential.

Currently, utility-scale and distributed energy storage are used for their most basic capabilities: stepping in when energy demand reaches its peak and helping keep the grid stable through services referred to as balancing and frequency regulation. But as renewable energy penetration increases and more loads such as EVs are electrified, peak demand will only grow.

The role that storage plays for EV charging stations seems well understood. On-site storage is used daily to provide power for charging cars at any given time. Utility-scale storage has the same capabilities and can be used to store and then supply renewable power to the grid in large quantities every day to help balance the demand of EVs.

A stable power system for EVs combines utilities and utility-scale storage with a network of subsystems where energy storage is co-located with EV charging. All of the systems are coordinated and synchronized to gather and dispatch energy at different times of the day based on all the factors that affect grid stability and the availability of renewable power. That synchronization is handled by intelligent energy management software that relies on sophisticated algorithms to forecast and respond to changes within fractions of a second.
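
As a rough illustration of the kind of rule such software might apply at a single charging site, here is a minimal sketch in Python; the class, thresholds and decision logic are hypothetical simplifications for this article, not a description of any real energy management product.

```python
from dataclasses import dataclass

# Toy dispatch rule for a co-located storage + EV charging subsystem.
# All names and threshold values below are illustrative assumptions.

@dataclass
class SiteState:
    renewable_kw: float         # forecast on-site renewable output
    ev_demand_kw: float         # forecast EV charging demand
    storage_soc_kwh: float      # current storage state of charge
    storage_capacity_kwh: float

def dispatch(state: SiteState, grid_price: float, price_threshold: float = 0.20) -> str:
    """Decide where the next interval's charging energy should come from."""
    surplus = state.renewable_kw - state.ev_demand_kw
    if surplus > 0 and state.storage_soc_kwh < state.storage_capacity_kwh:
        return "charge storage from surplus renewables"
    if surplus < 0 and state.storage_soc_kwh > 0 and grid_price > price_threshold:
        return "discharge storage to chargers"  # avoid expensive peak grid power
    return "draw from grid"

# Example: midday surplus at a site with half-full storage.
print(dispatch(SiteState(120.0, 80.0, 200.0, 500.0), grid_price=0.32))
```

A production system would run forecasts and re-optimize across many such sites continuously, but the basic trade-off of charging storage when renewables are in surplus and discharging when grid power is expensive stays the same.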

This model also makes it possible to manage the cost of electricity and EV demand on the grid. Those subsystems could be municipal-owned locations in lower-income areas. Such a subsystem would collect power in its storage asset and set the price locally on its own terms. These systems could incentivize residents to power up there at certain times of the day in order to make charging more affordable by providing an alternative to the real-time cost of electricity during peak demand when using a home outlet, for example.

Behavior change

The greatest challenge for utilities will be how to manage EV loads and motivate people to stagger charging their vehicles, rather than everyone waiting until they are home in the evening during off-peak renewable generation periods. If everyone plugged in at the same time, we’d end up cooking dinner in the dark.

While there’s been talk of incentivizing the public to charge at different times and spread out demand, motivators vary among demographics. With the ability to charge at home and skip a trip to the “gas station” — or “power station,” as it may be referred to in the future — many people will choose convenience over cost.

The way we currently operate, individual energy usage seems like an independent, isolated event to consumers and households. EVs will require everyone — from utilities and private charging stations to consumers — to be more aware of demand on the grid and act more as communities sharing energy.

Thus, a diverse charging network alone won’t solve the issue of overtaxing the grid. A combination of a new blueprint for managing energy on the grid plus behavior change is needed.

1 change that can fix the VC funding crisis for women founders

The venture capital industry as we know it is broken. At least for women, that is.

In terms of funding to women founders, 2020 was among the worst years on record. On a global level, only 9% of all funds deployed to technology startups went to founding teams that included at least one woman. Solo woman founders and all-women teams raised just 2% of all VC dollars, Crunchbase data showed.

Shockingly, this number is actually less than it was when we first started counting a decade ago, well before many high-profile diversity initiatives launched with the goal of fixing this very problem.

This funding gap isn’t just a moral crisis — it’s an economic one. The lack of investment into women-founded startups is a missed opportunity worth trillions of dollars. That’s because of overwhelming evidence that startups founded by women outperform startups founded by men: They generate more revenue, earn higher profits and exit faster at higher valuations. And they do all this while raising way less money.

What we’re doing isn’t working. Through research for my next book on women founders and funders, I kept asking myself the same question: When it comes to fixing the funding gap for women founders, what’s the one thing we can do that will make everything else easier or unnecessary?

I now believe that our best bet for long-term change is to focus our efforts on increasing the number of women investing partners who can write large seed checks. Here’s why.

Women investors are up to 3x more likely to fund women founders

Recently, one of the top VCs in the world told me how challenging it is to diversify his senior team. He expressed it as an accepted fact and a widespread belief. This is a common trope in Silicon Valley: Everyone wants gender diversity, but it’s so hard to find all the senior women!

In the venture capital industry, who you hire at the senior level is who you hang out with. And who you hire at the senior level determines who your fund will back.

Since studies now show that women investors are up to three times more likely to invest in women founders, it is clear that the fastest way to fund more women is to hire more women investing partners with check-writing ability. The effect on venture firms? Returns.

“When U.S. VC firms increased the proportion of female partners, they benefited with 9.7% more profitable exits and a 1.5% spike in overall fund returns annually,” explained Lisa Stone of WestRiver Group.

Data from All Raise and PitchBook reinforce the “correlation between hiring female decision-makers at the investment level and outperformance at the fund level,” adding that “69.2% of U.S. VCs that scored a top-quartile fund between 2009 and 2018 had women in decision-making roles.”

It shouldn’t be surprising that women investors are more likely to invest in women founders. That’s because humans have a propensity toward homophily: the tendency for like to attract like and for similarity to breed connection.

Homophily is why a vegan VC is more likely to invest in a vegan food tech, a gamer is more likely to hang out with gaming founders, or a parent is more likely to invest in a parent marketplace. People gravitate toward what they know.

Deena Shakir, who happens to be a woman and a mother, recently led Lux Capital’s investment into women’s health unicorn Maven. Shakir had multiple high-risk pregnancies with multiple complications, emergency C-sections, NICU stays and breastfeeding challenges.

“It is no coincidence that I am joined on Maven’s board of directors by four other mothers … and a brand-new father … whose personal journeys have also informed their professional conviction,” Shakir wrote in a Medium post.

Why seed checks have the greatest impact on the ecosystem

I believe that to fix the funding gap for women founders and jump-start the virtuous cycle of venture capital investing into women, we should focus on getting more seed checks into the hands of women founders. That’s because seed investing is a leading indicator of whether we are headed in the right direction in terms of closing the funding gap for women, according to Jenny Lefcourt, a partner at Freestyle and co-founder of All Raise, the leading nonprofit focused on diversifying the VC industry.

This doesn’t discount the importance of investments made into women founders at later stages. When a woman founder lands Series D capital, it boosts this year’s funding numbers for women founders and likely brings that particular founder closer to a liquidity event that will lead her (and her executives) to invest in more women.

That said, the greatest impact on the future ecosystem will come from widening the top of the funnel and giving more women at the seed stage the shot to one day reach a momentous Series D funding like Maven. After all, who we fund now becomes who we fund later.

Why large seed checks matter most

Finally, the size of the check is also important when thinking about how to have the biggest impact on the ecosystem.

I know first-hand that microchecks are critical to building an inclusive ecosystem. When women invest at the seed level — in any amount — they jumpstart a virtuous cycle of women funding women. That’s why, when I stepped in to lend a hand at my portfolio company while its solo woman founder was on parental leave, one of my key projects was to develop Jefa House, a way for Jefa’s own executives to easily invest in other women-founded startups.

That said, large party rounds made up entirely of small angel checks are few and far between. Similar challenges face small checks from emerging fund managers. Although the sheer number of emerging managers has increased 9x in seven years, the reality is that most emerging managers simply don’t have much money.

Are women venture capitalists who run their own microfunds more likely to invest in amazing women founders than Tier 1 funds with few or no women investing partners? Yes. Will it take them a long time to compete with those Tier 1 funds in terms of check size? Yes.

This is why it matters so much when leading funds hire or promote women to the partner level. Not only does it give women founders a better shot at funding from high-signal shops, but the moves that top funds make are key signals to others in the ecosystem: In venture capital, women investors don’t have to sit at the kids’ table.

Why we must hire women investing partners

We all know that great returns in early-stage venture capital come from making big bets on great ideas that others aren’t betting on. That is why VC investing is contrarian by definition. Thanks to our increasingly globalized world and clear data showing that diverse teams make better decisions and generate better returns, no one in 2021 truly believes that single white dudes in Palo Alto have a monopoly on billion-dollar ideas.

However, due to the nature of homophily, venture capital remains a highly homogenous industry, and the social and economic interactions and decisions of human beings remain deeply swayed by these principles. No matter how much work we do, birds of a feather really do flock — and fund — together.

This all leads to one place: The clearest path to funding different kinds of founders with different kinds of ideas is to put different kinds of investors on the investing side of the table. To get more funding to women founders, we need more women who can write checks. That’s why prioritizing the hiring of women investing partners who can write large seed checks is key to fixing the funding crisis for women founders and increasing VC returns worldwide.

For the love of the loot: Blockchain, the metaverse and gaming’s blind spot

The speed at which gaming has proliferated is matched only by the pace of new buzzwords inundating the ecosystem. Marketers and decision makers, already suffering from FOMO about opportunities within gaming, have latched onto buzzy trends like the applications of blockchain in gaming and the “metaverse” in an effort to get ahead of the trend rather than constantly play catch-up.

The allure is obvious, as the relationship between the blockchain, the metaverse and gaming makes sense. Gaming has always been at the forefront of digital ownership (one can credit gaming platform Steam for normalizing the concept for games, and arguably other media such as movies), and most agreed-upon visions of the metaverse rely upon virtual environments common in games with decentralized digital ownership.

Whatever your opinion of either, I believe they both have an interrelated future in gaming. However, the success or relevance of either of these buzzy topics is dependent upon a crucial step that is being skipped at this point.

Let’s start with the example of blockchain and, more specifically, NFTs. Collecting items of varying rarities, often randomly distributed, forms some of the core “loops” in many games (i.e., kill monster, get better weapon, kill tougher monster, get even better weapon, etc.), and collecting “skins” (e.g., different outfits or permutations of a game character) is one of the most embraced paradigms of micro-transactions in games.

Now, NFTs are positioned to be a natural fit, with various rare items having permanent, trackable and open value. Recent releases such as “Loot (for Adventurers)” have introduced a novel approach wherein the NFTs are simply descriptions of fantasy-inspired gear, offered in a way that other creators can use them as tools to build worlds around. It’s not hard to imagine a game built around NFT items, à la Loot.

But that’s been done before… kind of. Developers of games with a “loot loop” like the one described above have long had a problem with “farmers”, who acquire game currencies and items to sell to players for real money, against the terms of service of the game. The solution was to implement in-game “auction houses” where players could instead use real money to purchase items from one another.

Unfortunately, this had an unwanted side-effect. As noted by renowned game psychologist Jamie Madigan, our brains are evolved to pay special attention to rewards that are both unexpected and beneficial. When much of the joy in some games comes from an unexpected or randomized reward, being able to easily acquire a known reward with real money robbed the game of what made it fun.

The way NFTs are currently being discussed in relation to gaming is very much in danger of falling into this very trap: killing the core gameplay loop via a financial fast track. The most extreme examples of this phenomenon commit the biggest cardinal sin in gaming — a game that is “pay to win,” where a player with a big bankroll can acquire a material advantage in a competitive game.

Blockchain games such as Axie Infinity have rapidly increased enthusiasm around the concept of “play to earn,” where players can potentially earn money by selling tokenized resources or characters earned within a blockchain game environment. If this sounds like a scenario that can come dangerously close to “pay to win,” that’s because it is.

What is less clear is whether it matters in this context. Does anyone care enough about the core game itself rather than the potential market value of NFTs or earning potential through playing? More fundamentally, if real-world earnings are the point, is it truly a game or just a gamified micro-economy, where “farming” as described above is not an illicit activity, but rather the core game mechanic?

The technology culture around blockchain has elevated solving very hard problems that very few people care about. The solution (as with many problems in tech) involves reevaluation from a more humanist approach. In the case of gaming, there are some fundamental gameplay and game psychology issues to be tackled before these technologies can gain mainstream traction.

We can turn to the metaverse for a related example. Even if you aren’t particularly interested in gaming, you’ve almost certainly heard of the concept after Mark Zuckerberg staked the future of Facebook upon it. For all the excitement, the fundamental issue is that it simply doesn’t exist, and the closest analogs are massive digital game spaces (such as Fortnite) or sandboxes (such as Roblox). Yet, many brands and marketers who haven’t really done the work to understand gaming are trying to fast-track to an opportunity that isn’t likely to materialize for a long time.

Gaming can be seen as the training wheels for the metaverse — the ways we communicate within, navigate, and think about virtual spaces are all based upon mechanics and systems with foundations in gaming. I’d go so far as to predict the first adopters of any “metaverse” will indeed be gamers who have honed these skills and find themselves comfortable within virtual environments.

By now, you might be seeing a pattern: We’re far more interested in the “future” applications of gaming without having much of a perspective on the “now” of gaming. Game scholarship has proliferated since the early aughts due to a recognition of how games were influencing thought in fields ranging from sociology to medicine, and yet the business world hasn’t paid it much attention until recently.

The result is that marketers and decision makers are doing what they do best (chasing the next big thing) without the usual history of why said thing should be big, or what to do with it when they get there. The growth of gaming has yielded an immense opportunity, but the sophistication of the conversations around these possibilities remains stunted, due in part to our misdirected attention.

There is no “pay to win” fast track out of this blind spot. We have to put in the work to win.

The FDA should regulate Instagram’s algorithm as a drug

The Wall Street Journal on Tuesday reported Silicon Valley’s worst-kept secret: Instagram harms teens’ mental health; in fact, its impact is so negative that it introduces suicidal thoughts.

Thirty-two percent of teen girls who feel bad about their bodies report that Instagram makes them feel worse. Of teens with suicidal thoughts, 13% of British and 6% of American users trace those thoughts to Instagram, the WSJ report said. This is Facebook’s internal data. The truth is surely worse.

President Theodore Roosevelt and Congress formed the Food and Drug Administration in 1906 precisely because Big Food and Big Pharma failed to protect the general welfare. As its executives parade at the Met Gala in celebration of the unattainable 0.01% of lifestyles and bodies that we mere mortals will never achieve, Instagram’s unwillingness to do what is right is a clarion call for regulation: The FDA must assert its codified right to regulate the algorithm powering the drug of Instagram.

The FDA should consider algorithms a drug impacting our nation’s mental health: The Federal Food, Drug and Cosmetic Act gives the FDA the right to regulate drugs, defining drugs in part as “articles (other than food) intended to affect the structure or any function of the body of man or other animals.” Instagram’s internal data shows its technology is an article that alters our brains. If this effort fails, Congress and President Joe Biden should create a mental health FDA.

The public needs to understand what Facebook and Instagram’s algorithms prioritize. Our government is equipped to study clinical trials of products that can physically harm the public. Researchers can study what Facebook privileges and the impact those decisions have on our minds. How do we know this? Because Facebook is already doing it — they’re just burying the results.

In November 2020, as Cecilia Kang and Sheera Frenkel report in “An Ugly Truth,” Facebook made an emergency change to its News Feed, putting more emphasis on “News Ecosystem Quality” scores (NEQs). High NEQ sources were trustworthy sources; low were untrustworthy. Facebook altered the algorithm to privilege high NEQ scores. As a result, for five days around the election, users saw a “nicer News Feed” with less fake news and fewer conspiracy theories. But Mark Zuckerberg reversed this change because it led to less engagement and could cause a conservative backlash. The public suffered for it.

Facebook likewise has studied what happens when the algorithm privileges content that is “good for the world” over content that is “bad for the world.” Lo and behold, engagement decreases. Facebook knows that its algorithm has a remarkable impact on the minds of the American public. How can the government let one man decide the standard based on his business imperatives, not the general welfare?

Upton Sinclair memorably uncovered dangerous abuses in “The Jungle,” which led to a public outcry. The free market failed. Consumers needed protection. The 1906 Pure Food and Drug Act for the first time promulgated safety standards, regulating consumable goods impacting our physical health. Today, we need to regulate the algorithms that impact our mental health. Teen depression has risen alarmingly since 2007. Likewise, suicide among those aged 10 to 24 rose nearly 60% between 2007 and 2018.

It is of course impossible to prove that social media is solely responsible for this increase, but it is absurd to argue it has not contributed. Filter bubbles distort our views and make them more extreme. Bullying online is easier and constant. Regulators must audit the algorithm and question Facebook’s choices.

When it comes to the biggest issue Facebook poses — what the product does to us — regulators have struggled to articulate the problem. Section 230 is correct in its intent and application; the internet cannot function if platforms are liable for every user utterance. And a private company like Facebook loses the trust of its community if it applies arbitrary rules that target users based on their background or political beliefs. Facebook as a company has no explicit duty to uphold the First Amendment, but public perception of its fairness is essential to the brand.

Thus, Zuckerberg has equivocated over the years before belatedly banning Holocaust deniers, Donald Trump, anti-vaccine activists and other bad actors. Deciding what speech is privileged or allowed on its platform, Facebook will always be too slow to react, overcautious and ineffective. Zuckerberg cares only for engagement and growth. Our hearts and minds are caught in the balance.

The most frightening part of “An Ugly Truth,” the passage that got everyone in Silicon Valley talking, was the eponymous memo: Andrew “Boz” Bosworth’s 2016 “The Ugly.”

In the memo, Bosworth, Zuckerberg’s longtime deputy, writes:

So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.

Zuckerberg and Sheryl Sandberg made Bosworth walk back his statements when employees objected, but to outsiders, the memo represents the unvarnished id of Facebook, the ugly truth. Facebook’s monopoly, its stranglehold on our social and political fabric, its growth at all costs mantra of “connection,” is not de facto good. As Bosworth acknowledges, Facebook causes suicides and allows terrorists to organize. This much power concentrated in the hands of one corporation, run by one man, is a threat to our democracy and way of life.

Critics of FDA regulation of social media will claim this is a Big Brother invasion of our personal liberties. But what is the alternative? Why would it be bad for our government to demand that Facebook account to the public for its internal calculations? Is it safe for the number of sessions, time spent and revenue growth to be the only results that matter? What about the collective mental health of the country and world?

Refusing to study the problem does not mean it does not exist. In the absence of action, we are left with a single man deciding what is right. What is the price we pay for “connection”? This is not up to Zuckerberg. The FDA should decide.

Beware the hidden bias behind TikTok resumes

Social media has served as a launchpad to success almost as long as it has been around. The stories of going viral from a self-produced YouTube video and then securing a record deal established the mythology of social media platforms. Ever since, social media has consistently gravitated away from text-based formats and toward visual mediums like video sharing.

For most people, a video on social media won’t be a ticket to stardom, but in recent months, there have been a growing number of stories of people getting hired based on videos posted to TikTok. Even LinkedIn has embraced video assets on user profiles with the recent addition of the “Cover Story” feature, which allows workers to supplement their profiles with a video about themselves.

As technology continues to evolve, is there room for a world where your primary resume is a video on TikTok? And if so, what kinds of unintended consequences and implications might this have on the workforce?

Why is TikTok trending for jobs?

In recent months, U.S. job openings have risen to an all-time high of 10.1 million. For the first time since the pandemic began, available jobs have exceeded available workers. Employers are struggling to attract qualified candidates to fill positions, and in that light, it makes sense that many recruiters are turning to social platforms like TikTok and video resumes to find talent.

But the scarcity of workers does not negate the importance of finding the right employee for a role. Especially important for recruiters is finding candidates with the skills that align with their business’ goals and strategy. For example, as more organizations embrace a data-driven approach to operating their business, they need more people with skills in analytics and machine learning to help them make sense of the data they collect.

Recruiters have proven to be open to innovation where it helps them find these new candidates. Recruiting is no longer the manual process it used to be, with HR teams sorting through stacks of paper resumes and formal cover letters to find the right candidate. They embraced the power of online connections as LinkedIn rose to prominence and even figured out how to use third-party job sites like Glassdoor to help them draw in promising candidates. On the back end, many recruiters use advanced cloud software to sort through incoming resumes to find the candidates that best match their job descriptions. But all of these methods still rely on the traditional text-based resume or profile as the core of any application.

Videos on social media provide the ability for candidates to demonstrate soft skills that may not be immediately apparent in written documents, such as verbal communication and presentation skills. They are also a way for recruiters to learn more about the personality of the candidate to determine how they’d fit into the culture of the company. While this may be appealing for many, are we ready for the consequences?

We’re not ready for the close-up

While innovation in recruiting is a big part of the future of work, the hype around TikTok and video resumes may actually take us backward. Despite offering a new way for candidates to market themselves for opportunities, it also carries potential pitfalls that candidates, recruiters and business leaders need to be aware of.

The very element that gives video resumes their potential also presents the biggest problems. Video inescapably highlights the person behind the skills and achievements. As recruiters form their first opinions about a candidate, they will be confronted with information they do not usually see until much later in the process, including whether they belong to protected classes because of their race, disability or gender.

Diversity, equity and inclusion (DE&I) concerns have had a major surge in attention over the last couple of years amid heightened awareness and scrutiny around how employers are — or are not — prioritizing diversity in the workplace.

But evaluating candidates through video could erase any progress made by introducing more opportunities for unconscious, or even conscious, bias. This could create a dangerous situation for businesses if they do not act carefully because it could open them up to consequences such as damage to their reputation or even something as severe as discrimination lawsuits.

A company with a poor track record on diversity may find the fact that it reviewed videos from candidates used against it in court. Recruiters reviewing the videos may not even be aware of how the race or gender of candidates is impacting their decisions. For that reason, many of the businesses I have seen implement an option for video in their recruiting flow do not allow their recruiters to watch the video until late in the recruiting process.

But even if businesses address the most pressing issues of DE&I by managing bias against those protected classes, accepting videos still raises diversity issues around less protected characteristics such as neurodiversity and socioeconomic status. A candidate with exemplary skills and a strong track record may not present themselves well through a video, coming across as awkward to the recruiter watching it. Even if that impression is irrelevant to the job, it could still influence the recruiter’s stance on hiring.

Furthermore, candidates from affluent backgrounds may have access to better equipment and software to record and edit a compelling video resume. Other candidates may not, resulting in videos that may not look as polished or professional in the eyes of the recruiter. This creates yet another barrier to the opportunities they can access.

As we sit at an important crossroads in how we handle DE&I in the workplace, it is important for employers and recruiters to find ways to reduce bias in the processes they use to find and hire employees. While innovation is key to moving our industry forward, we have to ensure top priorities are not being compromised.

Not left on the cutting room floor

Despite all of these concerns, social media platforms — especially those based on video — have created new opportunities for users to expand their personal brands and connect with potential job opportunities. There is potential to use these new systems to benefit both job seekers and employers.

The first step is to ensure that there is always a place for a traditional text-based resume or profile in the recruiting process. Even if recruiters can get all the information they need about a candidate’s capabilities from video, some people will just naturally feel more comfortable staying off camera. Hiring processes need to be about letting people put their best foot forward, whether that is in writing or on video. And that includes accepting that the best foot to put forward may not be your own.

Instead, candidates and businesses should consider using videos as a place for past co-workers or managers to endorse the candidate. An outside endorsement can do a lot more good for an application than simply stating your own strengths because it shows that someone else believes in your capabilities, too.

Video resumes are hot right now because they are easier to make and share than ever and because businesses are in desperate need of strong talent. But before we get caught up in the novelty of this new way of sharing our credentials, we need to make sure that we are setting ourselves up for success.

The goal of any new recruiting technology should be to make it easier for candidates to find opportunities where they can shine without creating new barriers. There are some serious kinks to work out before video resumes can achieve that, and it is important for employers to consider the repercussions before they damage the success of their DE&I efforts.

The network effect is anti-competitive

A U.S. federal judge last week struck down Apple rules restricting app developers from selling directly to customers outside the App Store.

Apple’s stock fell 3% on the news, which is being regarded as a win for small and midsize app developers because they’ll be able to build direct billing relationships with their customers. But Apple is just one of many Big Tech companies that dominate their sector.

The larger issue is how this development will impact Amazon, Facebook, Grubhub and other tech giants with online marketplaces that use draconian terms of service to keep their resellers subservient. The skirmish between Apple and small and midsize app developers is just a smaller battle in a much larger war.

App makers pay up to 30% on every sale they make on the Apple App Store. Resellers on Amazon pay a monthly subscription fee, a sales commission of 8% to 15%, fulfillment fees and other miscellaneous charges. Grubhub charges restaurants 15% of every order, a credit card processing fee, an order processing fee and a 10% delivery commission.
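
To make that squeeze concrete, here is a rough, hypothetical comparison of what a $100 sale might leave a seller on each platform. The commission percentages are the figures cited above; the flat per-order fees and the own-website processing rate are illustrative assumptions, not published rates.

```python
# Rough comparison of what a $100 sale leaves a seller after platform fees.
# Commission percentages are the figures cited above; flat fees are
# illustrative assumptions, not published rates.

def net_after_fees(sale: float, commission_pct: float, flat_fees: float = 0.0) -> float:
    """Return what the seller keeps after the platform's cut."""
    return sale - sale * commission_pct - flat_fees

sale = 100.00
print(f"App Store (30% commission):            ${net_after_fees(sale, 0.30):.2f}")
print(f"Amazon (15% commission + ~$3 in fees): ${net_after_fees(sale, 0.15, 3.00):.2f}")
print(f"Grubhub (15% + 10% delivery + ~$2):    ${net_after_fees(sale, 0.25, 2.00):.2f}")
print(f"Own site (~3% payment processing):     ${net_after_fees(sale, 0.03):.2f}")
```

The exact numbers vary by seller and product, but the pattern is the point: on someone else’s platform, a double-digit share of every sale is gone before costs.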

Like app developers, online resellers and social media influencers are all falling for the same big lie: that they can build a sustainable business with healthy margins on someone else’s platform. The reality is the App Store, online marketplaces and even social networks that dominate their sectors have the unilateral power to selectively deplatform and squeeze their users, and there’s not much to be done about it.

Healthy competition exists inside the App Store and among marketplace resellers and aspiring social media influencers. But no one seems to be talking about the real elephants in the room, which are the social networks and online marketplace providers themselves. In some respects, they’ve become almost like digital dictators with complete control over their territories.

It’s something every small and midsize business that gets excited about some new online service catering to their industry should be aware of because it directly impacts their ability to grow a stable business. The federal judge’s decision suggests the real goal in digital business is a direct billing relationship with the end user.

On the internet, those who are able to lead a horse to water and make it drink — outside the walled gardens of digital marketplace operators like Uber, Airbnb and Udemy — are the true contenders. In content and e-commerce, this is what most small and midsize companies don’t realize. Your own website or owned media, at a top-level domain that you control, is the only unfettered way to sell direct to end users.

Mobile app makers on Apple’s App Store, resellers on Amazon and aspiring content creators on Instagram, YouTube and TikTok are all subject to the absolute control of digital titans who are free to govern by their own rules with unchecked power.

For access to online marketplaces and social networks, we got a raw deal. We’re basically plowing their fields like digital sharecroppers. Resellers on Amazon are forced to split their harvest with a landlord who takes a gross percentage with no caps. Amassing followers on TikTok is building an audience that’s locked inside their venue.

These tech giants — all former startups that built their audiences from scratch — are free to impose and selectively enforce oppressive rules. If you’re a small fry, they can prohibit you from asking for your customer’s email address and deplatform you for skimming, but look the other way when Spotify and The New York Times do the same thing. Both were already selling direct and through the App Store prior to Friday’s ruling.

How is that competitive? Even after the ruling, Big Tech still gets to decide who they let violate their terms of service and who they deplatform. It’s not just their audience. It’s their universe, their governance, their rules and their enforcement.

In the 1948 court case United States v. Paramount Pictures, the Supreme Court ruled that film studios couldn’t own their own theaters because that meant they could exclusively control what movies were screened. They stifled competition by controlling what films made it to the marquee, so SCOTUS broke them up.

Today, social networks control what gets seen on their platforms, and with the push of a button, they can give the hook to whoever they want, whenever they want. The big challenge that the internet poses to capitalism is that the network effect is fundamentally anti-competitive. Winner-take-all markets dominated by tech giants look more like government-controlled than free-market economies.

On the one hand, the web gives us access to a global marketplace of buyers and sellers. On the other, a few major providers control the services that most people use to do business, because they don’t have the knowledge or resources to stand up a competitive website. But unless you have your own domain and good search visibility, you’re always in danger of being deplatformed and losing access to your customers or audience members with no practical recourse.

The network effect is such that once an online marketplace becomes dominant, it neutralizes the competitive market, because everyone gravitates to the dominant service to get the best deal. There’s an inherent conflict between the goals of a winner-takes-all tech company and the goals of a free market.

Dominant online marketplaces are only competitive for users. Meanwhile, marketplace providers operate with impunity. If they decide to use half-baked AI or offshore contractors to police their terms of service, false positives and all, there’s no practical way for users to contest a decision. How can Facebook possibly govern nearly 3 billion users judiciously with around 60,000 employees? As we’ve seen, it can’t.

For app makers, online resellers and creators, the only smart option is open source on the open web. Instead of relying on someone else’s audience (or software, for that matter), you own your online destination, powered by software like WordPress or Discourse, and you never have to worry about getting squeezed when the founders go public or their platform gets bought by profit-hungry investment bankers. Only then can you protect your profit margins. And only then are the terms of service the laws of the land.

Politics aside, as former President Donald Trump’s deplatforming demonstrated, if you get kicked off Facebook and Twitter, there’s really nowhere else to go. If they want you out, it’s game over. It’s no coincidence Trump lost his Facebook and Twitter accounts on the same day the Republicans lost the Senate. If the GOP takes back the Senate, watch Trump get his social media accounts back. Social networks ward off regulators by appeasing the legislative majority.

So don’t get too excited about the new Amazon Influencer Program. If you want to build a sustainable digital business, you need an owned media presence powered by software that doesn’t rake commissions or claim access to your customer contact information, with an audience that can’t be commandeered by an algorithm tweak.

Gig workers with smartphones can help set infrastructure priorities

With all the focus on whether Congress will enact a major infrastructure law to rebuild the United States’ roads, bridges, railways, etc., nobody seems to be paying attention to the elephant in the room: Even if the legislation is passed, where do we begin? You might be surprised to learn that the gig economy has an app for that.

We can and should hire professional consultants and other experts to review our infrastructure systems to see what needs the most immediate attention, but the sheer number of roads, bridges, dams and other critical infrastructure in the U.S. makes the job of prioritizing daunting.

According to the American Society of Civil Engineers’ 2021 Report Card for America’s Infrastructure, there are over 4 million miles of public roads, 617,000 bridges, 91,000 dams and 140,000 rail miles in the U.S. These are massive statistics.

So as soon as an infrastructure bill passes, the big questions will be: Where do we begin, and how do we set priorities — expeditiously and at minimum cost, at least for the first step? The next step would be to bring in professional engineers and experts to begin the rebuilding process.

There are some obvious examples of infrastructure systems needing immediate, prioritized attention (see the Sidney Sherman Bridge in Houston, which had to be shut down a few years ago for a corroded bridge bearing and was recently classified as “structurally deficient”).

Fortunately, there is another massive statistic out there that can help: 216 million. That is the approximate number of U.S. adults that own a smartphone. Pew Research Center recently found that 85% of all U.S. adults own a smartphone, which, needless to say, is the highest it’s ever been. Even enlisting just a small percentage of the 216 million smartphone users out there can help immensely with this task.

Federal, state and local governments can and should consider the awesome (and relatively inexpensive) power of our smartphones and the gig economy. Gig workers can be enlisted to use the smartphones that they already own to provide inspection data and photographs of the key identified roads, bridges, dams and rails in the 50 states. The data and photos they collect can then be instantly transmitted to a national database for review and evaluation by professional engineers and consultants.
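
As a sketch of what a single crowdsourced report might look like, the snippet below defines a hypothetical payload and serializes it for upload. The field names, asset identifier and coordinates are illustrative assumptions, not a real schema used by any agency or app.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shape of one crowdsourced inspection report.
# Field names, the asset ID and the coordinates are illustrative only.

@dataclass
class InspectionReport:
    asset_type: str          # e.g. "bridge", "road", "dam", "rail"
    asset_id: str            # identifier from a national asset inventory
    latitude: float
    longitude: float
    photo_filename: str      # photo captured on the worker's phone
    observed_condition: str  # free-text notes from the gig worker
    reported_at: str         # UTC timestamp

report = InspectionReport(
    asset_type="bridge",
    asset_id="TX-101-0045",
    latitude=29.7269,
    longitude=-95.2683,
    photo_filename="bearing_corrosion.jpg",
    observed_condition="Visible corrosion on south bearing; debris in expansion joint.",
    reported_at=datetime.now(timezone.utc).isoformat(),
)

# Serialized payload ready to be sent to a central review database.
print(json.dumps(asdict(report), indent=2))
```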

I know this can be done because my colleagues and I have done this before. We tap into a worldwide network of gig workers (data collectors or data contributors) operating from an open source app and with full transparency.

Our projects have involved contributors photographing and documenting sewer access points, bridges, water access points and other infrastructure systems. We even partnered with a major nonprofit on behalf of USAID’s Bureau for Humanitarian Assistance to bolster its Water, Sanitation and Hygiene (WASH) Program by providing rapid WASH needs assessments wherein our contributors can be mobilized on an emergency basis to provide photographs and other data on water access, sanitation and hygiene in Colombia.

Why can’t we do the same for bridges, roads, tunnels and other infrastructure here in the U.S.? This technology needs to be scaled, and we know it can be done.

It’s simple — and the solution is in plain sight. Our smartphones and gig workers allow us to set priorities using their photos and input from what their eyes are seeing, and then professional experts can follow up to begin implementation. There are already provisions in the Senate bill that could provide funding for this type of advanced technology research. And there is an ongoing need, even after repairs are done, to monitor the condition of our highways, bridges and tunnels.

Using this gig-worker-enabled smartphone technology will not only help our federal, state and local governments set priorities quickly; it will also allow thousands of everyday Americans to be part of the rebuilding process. This has the added benefit of democratizing the job of fixing our infrastructure and creating a grassroots movement of people using their own smartphones to help rebuild and repair U.S. infrastructure for the current and future generations.

The past, present and future of IoT in physical security

When Axis Communications released the first internet protocol (IP) camera after the 1996 Olympic games in Atlanta, there was some initial confusion. Connected cameras weren’t something the market had been clamoring for, and many experts questioned whether they were even necessary.

Today, of course, traditional analog cameras have been almost completely phased out as organizations have recognized the tremendous advantage that IoT devices can offer, but that technology felt like a tremendous risk during those early days.

To say that things have changed since then would be a dramatic understatement. The growth of the Internet of Things (IoT) represents one of the ways physical security has evolved. Connected devices have become the norm, opening up exciting new possibilities that go far beyond recorded video. Further developments, such as the improvement and widespread acceptance of the IP camera, have helped power additional breakthroughs including improved analytics, increased processing power, and the growth of open-architecture technology. On the 25th anniversary of the initial launch of the IP camera, it is worth reflecting on how far the industry has come — and where it is likely to go from here.

Tech improvements herald the rise of IP cameras

Comparing today’s IP cameras to those available in 1996 is almost laughable. While they were certainly groundbreaking at the time, those early cameras could record just one frame every 17 seconds — quite a change from what cameras can do today.

But despite this drawback, those on the cutting edge of physical security understood what a monumental breakthrough the IP camera could represent. After all, creating a network of cameras would enable more effective remote monitoring, which — if the technology could scale — would enable them to deploy much larger systems, tying together disparate groups of cameras. Early applications might include watching oil fields, airport landing strips or remote cell phone towers. Better still, the technology had the potential to usher in an entirely new world of analytics capabilities.

Of course, better chipsets were needed to make that endless potential a reality. Groundbreaking or not, the limited frame rate of the early cameras was never going to be effective enough to drive widespread adoption of traditional surveillance applications. Solving this problem required a significant investment of resources, but before long these improved chipsets brought IP cameras from one frame every 17 seconds to 30 frames per second. Poor frame rate could no longer be listed as a justification for shunning IP cameras in favor of their analog cousins, and developers could begin to explore the devices’ analytics potential.

Perhaps the most important technological leap was the introduction of embedded Linux, which made IP cameras more practical from a developer point of view. During the 1990s, most devices used proprietary operating systems, which made them difficult to develop for.

Even within the companies themselves, proprietary systems meant that developers had to be trained on a specific technology, costing companies both time and money. There were a few attempts at standardization within the industry, such as the Wind River operating system, but these ultimately failed. They were too small, with limited resources behind them — and besides, a better solution already existed: Linux.

Linux offered a wide range of benefits, not the least of which was the ability to collaborate with other developers in the open source community. This was a road that ran two ways. Because most IP cameras lacked the hard disk necessary to run Linux, a flash file system known as JFFS was developed that would allow a device to use a flash memory chip as a hard disk. That technology was contributed to the open source community, and while it is currently on its third iteration, it remains in widespread use today.

Compression technology represented a similar challenge, with the more prominent data compression models in the late ’90s and early 2000s poorly suited for video. At the time, video storage involved individual frames being stored one-by-one — a data storage nightmare. Fortunately, the H.264 compression format, which was designed with video in mind, became much more commonplace in 2009.

By the end of that year, more than 90% of IP cameras and most video management systems used the H.264 compression format. It is important to note that improvements in compression capabilities have also enabled manufacturers to improve their video resolution as well. Before the new compression format, video resolution had not changed since the ’60s with NTSC/PAL. Today, most cameras are capable of recording in high definition (HD).

1996: First IP camera is released.
2001: Edge-based analytics with video motion detection arrive.
2006: First downloadable, edge-based analytics become available.
2009: Full HD becomes the standard video resolution; H.264 compression goes mainstream.
2015: Smart compression revolutionizes video storage.

The growth of analytics

Analytics is not exactly a “new” technology — customers requested various analytics capabilities even in the early days of the IP camera — but it is one that has seen dramatic improvement. Although it might seem quaint by today’s high standards, video motion detection was one of the earliest analytics loaded onto IP cameras.

Customers needed a way to detect movement within certain parameters to avoid having a tree swaying in the wind, or a squirrel running by, trigger a false alarm. Further refinement of this type of detection and recognition technology has helped automate many aspects of physical security, triggering alerts when potentially suspicious activity is detected and ensuring that it is brought to human attention. By taking human fallibility out of the equation, analytics has turned video surveillance from a reactive tool to a proactive one.
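
As a rough sketch of how that kind of filtering works, the snippet below does simple frame differencing with a minimum contour area so that small movements (a swaying branch, a passing squirrel) are ignored. It uses OpenCV; the video source, threshold and area values are illustrative assumptions, not settings from any real camera.

```python
import cv2  # OpenCV 4.x

# Minimal frame-differencing motion detector (illustrative sketch only).
cap = cv2.VideoCapture("camera_stream.mp4")  # placeholder video source
prev_gray = None
MIN_AREA = 500  # ignore small changes such as leaves or small animals

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is None:
        prev_gray = gray
        continue
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("motion detected")  # in practice, raise an alert or event
    prev_gray = gray

cap.release()
```

Modern on-camera analytics are far more sophisticated than this, but the underlying idea of separating meaningful change from noise is the same.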

Reliable motion detection remains one of the most widely used analytics, and while false alarms can never be entirely eliminated, modern improvements have made it a reliable way to detect potential intruders. Object detection is also growing in popularity and is increasingly capable of classifying cars, people, animals and other objects.

License plate recognition is popular in many countries (though less so in the United States), not just for identifying vehicles involved in criminal activity, but for uses as simple as parking recognition. Details like car model, shirt color or license plate number are easy for the human eye to miss or fail to notice — but thanks to modern analytics, that data is cataloged and stored for easy reference. The advent of technology like deep learning, which features better pattern recognition and object classification through improved labeling and categorization, will drive further advancements in this area of analytics.

The rise of analytics also helps highlight why the security industry has embraced open-architecture technology. Simply put, it is impossible for a single manufacturer to keep up with every application that its customers might need. By using open-architecture technology, they can empower those customers to seek out the solutions that are right for them, without the need to specifically tailor the device for certain use cases. Hospitals might look to add audio analytics to detect signs of patient distress; retail stores might focus on people counting or theft detection; law enforcement might focus on gunshot detection — with all of these applications housed within the same device model.

It is also important to note that the COVID-19 pandemic drove interesting new uses for both physical security devices and analytics — though some applications, such as using thermal cameras for fever measurement, proved difficult to implement with a high degree of accuracy. Within the healthcare industry, camera usage increased significantly — something that is unlikely to change. Hospitals have seen the benefit of cameras within patient rooms, with video and intercom technology enabling healthcare professionals to monitor and communicate with patients while maintaining a secure environment.

Even simple analytics like cross-line detection can generate an alert if a patient who is a fall risk attempts to leave a designated area, potentially reducing accidents and overall liability. The fact that analytics like this bear only a passing mention today highlights how far physical security has come since the early days of the IP camera.

Looking to the future of security

An examination of today’s trends can provide a glimpse into what the future might hold for the security industry. For instance, video resolution will certainly continue to improve.

Ten years ago, the standard resolution for video surveillance was 720p (1 megapixel), and 10 years before that it was the analog NTSC/PAL resolution of 572×488, or 0.3 megapixels. Today, the standard resolution is 1080p (2 megapixels), and a healthy application of Moore’s law indicates that 10 years from now it will be 4K (8 megapixels).

As ever, the amount of storage that higher-resolution video requires is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video practical.
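
To see why storage becomes the bottleneck, here is a back-of-the-envelope sketch in Python. The bitrates are rough illustrative assumptions for continuously recorded H.264 video, not figures from any vendor or from the Zipstream technology mentioned above.

```python
# Rough storage estimate for one month of continuous recording per camera.
# Bitrates are illustrative assumptions only; real figures vary widely with
# scene complexity, frame rate, codec and compression settings.
BITRATES_MBPS = {
    "720p (1 MP)": 2.0,
    "1080p (2 MP)": 4.0,
    "4K (8 MP)": 16.0,
}

SECONDS_PER_DAY = 24 * 60 * 60
DAYS = 30

for label, mbps in BITRATES_MBPS.items():
    # megabits -> gigabytes: divide by 8 (bits to bytes) and by 1,000 (MB to GB)
    gigabytes = mbps * SECONDS_PER_DAY * DAYS / 8 / 1000
    print(f"{label}: ~{gigabytes:,.0f} GB per camera per month")
```

Under these assumptions, a single 4K camera generates several terabytes of footage a month, which is why smarter compression matters as much as better sensors.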

Cybersecurity will also be a growing concern for both manufacturers and end users.

Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect.

Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.

While new regulations are coming, it’s important to remember that regulation always lags behind, and companies that wish to have a positive reputation will need to adhere to their own ethical guidelines. More and more consumers now list ethical considerations among their major concerns — especially in the wake of the COVID-19 pandemic — and today’s businesses will need to strongly consider how to broadcast and enforce responsible product use.

Change is always around the corner

Physical security has come a long way since the IP camera was introduced, but it is important to remember that these changes, while significant, took place over more than two decades. Changes take time — often more time than you might think. Still, it is impossible to compare where the industry stands today to where it stood 25 years ago without being impressed. The technology has evolved, end users’ needs have shifted, and even the major players in the industry have come and gone according to their ability to keep up with the times.

Change is inevitable, but careful observation of current trends and how they fit into evolving security needs can help developers and device manufacturers understand how to position themselves for the future. The pandemic highlighted the fact that today’s security devices can provide added value in ways that no one would have predicted just a few short years ago, further underscoring the importance of open communication, reliable customer support and ethical behavior.

As we move into the future, organizations that continue to prioritize these core values will be among the most successful.

#column, #facial-recognition, #hardware, #internet-protocol, #ip-camera, #linux, #opinion, #physical-security, #security, #surveillance, #tc

What minority founders must consider before entering the venture-backed startup ecosystem

Funding for Black entrepreneurs in the U.S. hit nearly $1.8 billion in the first half of 2021 — a fourfold increase from the previous year. But most venture-backed startups are “still overwhelmingly white, male, Ivy-League-educated and based in Silicon Valley,” according to a study conducted by RateMyInvestor and Diversity VC.

With venture investors committing to funding Black and minority founders, alongside the growing availability of government-backed proposals, such as New Jersey allocating $10 million to a seed fund for Black and Latinx startups, can we expect to see fundamental change? Or will we have to repeat the same conversations about representation failings within VC funds?

Crunchbase examined access to capital in the venture-backed startup ecosystem and found that many industry leaders still worry that nothing will drastically shift. As a Black fintech founder, I believe that venture investors are making safe bets and investing in late-stage founders instead of early or even pre-seed stages.

But what about those minority founders who don’t have family, friends or connections to lean on for the first $250,000? Venture funding does remain elusive, but here are some tricks for startup founders to hack the system.

Realize you are up against an outdated system

Getting your foot in the door with new venture capitalist partners is challenging, and it is often easy for minority founders to be naive at first. I thought that reading TechCrunch and analyzing other VC deals I saw in the news would help me land multiple responses and speak the language of those who managed to score million-dollar deals for their startups. However, I didn’t receive a single response while other founders received VC investment for basic ideas.

This is something I had to learn the hard way: What you hear in the media or read on a company blog post often simplifies the process, and sometimes fails to cover the trajectory that minority founders, in particular, must follow to secure funding.

I experienced hundreds of rejections before raising $2 million to start a mobile payment platform, Bleu, using beacon technology to drive simple and secure payments. It is a huge mountain to climb and a full-time job to continuously pitch your vision and yourself to reach the first meeting with a VC fund — and that’s still miles away from a funding discussion.

These discussions then bring further biases to the surface. If you sat in the conference rooms or on those Zoom calls and heard the types of deals proposed to minority founders, you’d see how offensive they can be. Often, these founders are offered all the money they have requested — but don’t be fooled. It is usually not given all at once due to what I consider to be a lack of trust. Essentially, interval funding equates to being babysat.

Therefore, as a minority founder, you have to realize that it will be a long ride, and you will face rejections because you are at a disadvantage before even opening your mouth to pitch your idea. It is all possible, but patience is key.

Think of the worst-case scenario

Once I figured out how complicated the funding process was, my coping mechanism was to figure out how to capitalize on the business ideas I already had in place in case I never received any VC funding.

Think: How could you make money without an institutional investor, friends, family or internal networks? You’ll be surprised by your entrepreneurial thirst for success when you’ve experienced 100 rejections. This is why minority businesses caught in these testing situations can quickly gain the upper hand, whether through ancillary and side businesses or crowdfunding over GoFundMe and Kickstarter.

Although generally considered non-essential, ancillary companies do provide a regular flow of income and services to assist your core business idea. Most importantly, a recurring revenue stream outside your core business demonstrates to investors that you can create valuable products and acquire loyal customers.

Make sure to find a niche market and carry out surveys with potential clients to find out what specific needs they have. Then, build a product with their feedback in mind and launch it to beta clients. When you publicly release the product, find resellers to keep internal headcount low and generate recurring revenue.

Don’t take ancillaries lightly, though; they are not just a side business. There can be payment issues if you get hooked on them for revenue, distractions from clients or partners wanting custom requests, and supply chain problems.

In my case, I built a point-of-sale (POS) software platform to sell to merchants, which gave me a different revenue stream that could integrate with Bleu’s payment technology. These ancillary businesses can help fund your core business until you manage to plan how to launch fully or source further funding.

In 2019, The New York Times published an article headlined “More Start-Ups Have an Unfamiliar Message for Venture Capitalists: Get Lost.” It highlights how more and more entrepreneurs shunned by the VC funding route are turning to alternatives and forming counter-movements. There are always alternatives to look at if the fundraising process is proving to be too arduous.

Make serious headway with accelerators

Accelerators allow ventures to define their products or services, quickly build networks and, most importantly, sit at tables they wouldn’t be able to reach on their own. Applying to accelerators as a minority founder was the real turning point for me because I met a crucial investor who allowed us to build credibility and open up to new networks, investors and clients.

I would suggest looking out for accelerators explicitly searching for minority founders by using platforms such as F6S. They match you with accelerators and early growth programs committed to innovation in various global industries, like financial technology. That’s how I found the VC FinTech Accelerator in 2016, where one-third of founders were from minority backgrounds.

Then, Bleu earned a spot in the 2020 class of the IBM Hyper Protect Accelerator dedicated to supporting innovative startups in fintech and health tech industries. These types of accelerators offer startups workshops, technical and business mentorship, and access to a network of partners, customers and stakeholders.

You can impress accelerators by creating a pitch deck and a company video of less than two minutes that shows the founders and the product, and by engaging with the fintech community to spread the news.

The other alternative to accelerators is government funds, but they have had little success investing in startups for myriad reasons. Government funding tends to be more hands-off because these funds are not under significant pressure from limited partners (LPs, either institutional or individual investors) to perform.

What you need as a minority founder is an investor who is an active partner but, with government-backed funds, there is less demand to return the capital. We have to ask ourselves whether governments are really searching for the best minority-owned startups to help them get sufficient returns.

Tap into foreign markets

There are many unconscious social stigmas, stereotypes and unseen biases that exist in the U.S. And you’ll find those cultural dynamics are radically different in other countries that don’t have the same history of discrimination, especially when looking at a team or assessing founders.

I also noticed that, as well as reduced bias, investors out of Southeast Asia, Nordic countries and Australia seemed far more likely to take risks on new contactless payment technology as cash use decreased across their regions. Take Klarna and Afterpay as examples of fintech success stories.

First, I engaged in market research and pored over annual reports to decide whether I should look abroad for funding, instead of applying to funds closer to home. I looked at Nielsen reports, payment publications, PaymentSource and numerous government documents or white papers to figure out the cash usage globally.

My investigations revealed that fintech in Australia was far ahead of the curve, with four-fifths of the population using contactless payments. The financial services sector is also the largest contributor to the national economy, adding around $140 billion to GDP a year. Therefore, I spoke to the Australian Department of Foreign Affairs and Trade in the U.S., and they recommended some regulatory payment groups.

I immediately flew to Australia to meet with the banking community, and through word of mouth I found an Australian investor who saw the demand for mobile payment solutions firsthand.

In contrast, an investor in the U.S. still using cash and card had no interest in what I had to say. This highlights the importance of market research and seeking out investors rather than waiting for them to come to you. There is no science to it; leverage your network and reach out to people over LinkedIn, too.

The need to diversify the VC industry internally

VC funding needs to become more inclusive for women and minority groups by tackling the pipeline problem and addressing the level of diversity within VC funds. All of the networks that VCs reach out to first tend to come from university programs at Stanford, MIT and Harvard. These more privileged and wealthy students are able to easily leverage the traditional and outdated networks built to benefit them.

The number of venture dollars flowing to Black and Latinx founders is dismally low partly due to this knowledge gap; many female and minority founders don’t even know that VC funding is an option for them. Therefore, if you do receive seed funding, spread the news about it within your networks to help others.

Inclusion starts at the educational level, but when the percentage of Black and minority students at these elite colleges is still low, you can see why minority representation is needed in the VC ranks. Even if representation rises by a percent, that would be a significant change.

There are increasing numbers of VC funds announcing initiatives and interest in investing in minority businesses, and I would recommend looking at these in-depth. But what about the demographics of the VC firms? How many ethnicities are present in the executive ranks?

To change the venture-backed startup ecosystem, we need to start at the top and diversify those signing the checks. Looking toward the future, it is Black-led funds and others that focus on diversity, like Women’s Venture Fund, Backstage Capital and Elevate Capital Inclusive Fund, that are lighting the way to solutions that will reflect the diversity of the U.S.

It’s up to the investor community at large to be intentional about building relationships with, and ultimately providing funding to, more women and minority-led startups.

Despite the barriers and hurdles minority founders face when searching for VC funding, more and more avenues for acquiring funding are appearing as the disparities are brought to the media’s attention.

As the outdated system adjusts, the key is to continue preparing yourself for rejections and searching for appropriate accelerators to build vital networks. Then, if you aren’t having any luck, consider what you could do with your business idea without the VC funding or turn to foreign markets, which may have a different setup and varied opportunities.

#column, #diversity, #entrepreneurship, #financial-technology, #funding, #opinion, #private-equity, #startups, #tc, #venture-capital

Tech can help solve US cities’ affordability crisis

U.S. cities are in the midst of an affordability crisis. Just between May 2020 and May 2021, home prices saw their biggest annual increase in more than two decades and construction material prices increased by 24%. The cost of renting has risen faster than renters’ incomes for 20 years. Construction needs to play a critical role in fixing these pressing issues, but is the industry ready?

Construction is a $10 trillion global industry that employs more than 200 million people worldwide. But despite its size and importance, the industry’s annual labor productivity has only increased by 0.1% per year since 1947.

Since 1947, we’ve witnessed amazing advances in technology and science. Industries like agriculture, manufacturing and retail have achieved quantum leaps in productivity, with improved bioengineering increasing yields and the introduction of cutting-edge logistics bringing affordable consumer goods to the mass market. Labor productivity in these industries increased by over 8x between 1947 and 2010, compared with roughly 1x (essentially flat) in construction.

Why, amid all this progress and innovation, do millions of construction workers in the U.S. still have to rely on manual, pen-and-paper processes for critical parts of their work?

We’ve heavily underinvested in the technology that can help save us from the crisis we face. Historically, entrepreneurs, technologists and investors haven’t spent the time to understand the specific needs and workflows of the construction industry.

Today, more than $800 billion a year is spent on commercial construction, but just a tiny fraction of that goes toward construction technology. In recent years, construction ranked lowest of all industries for technology spending as a percentage of revenue — coming in at just 1.5% — far below the all-industry average of 3.3%, let alone industries like banking, which came in at 7.2%.

A massive chunk of that annual spending — more than $250 billion a year — goes toward construction materials. And they’re only getting more expensive. Materials represent roughly a third of a project’s cost, yet most contractors have to rely on manual workarounds created long before the invention of smartphones to order materials.

This results in workers both on the job site and in the office being overburdened and spending far too much valuable time on paperwork, chasing down materials and fixing errors.

Office teams receive hundreds, if not thousands, of materials requests from the field, all in different formats — including requisitions written with a marker on pizza boxes. They have to manually convert handwritten requisitions into purchase orders sent to suppliers via email, spreadsheets and PDFs, retype order information into their accounting systems, and play phone tag with their suppliers and field teams to keep tabs on order statuses.

Unfortunately, all of that chaos often leads to mistakes, missed opportunities to buy at the best price and project delays.

The mayhem continues for accounting teams, who have no easy way to reconcile their invoices or know if they’re paying the right amount, let alone track rebates and payment terms across different vendors.

Meanwhile, foremen — whose time is more valuable than ever in the current labor squeeze — are often spending less than 30% of their time doing what they do best: building. Without an easy way to select the exact materials they need and track them to delivery, cases of the wrong materials showing up at the wrong time are too common, throwing project timelines off track and creating a huge amount of waste.

Technology can make ordering and managing materials much easier, allowing workers on site and in the office to focus on other critical tasks. It can also help contractors catch common errors before they derail a project and help us build in a more environmentally sustainable way.

Buildings are more than bricks and mortar; they’re hospitals, schools, homes and small businesses. The buildings that surround us quite literally shape our lives. Our communities need them — we need places to meet, learn, play and heal. Imagine if the things that we rely on to create vibrant communities were cheaper to fix — or faster to build?

A new generation of workers who grew up with phones in their pockets is now joining the construction industry and expecting change. By fixing the broken supply chain, we can make construction faster, cheaper and more efficient.

We can move forward and solve our most urgent needs as a society — from building affordable housing to fixing our nation’s infrastructure — and make our cities more affordable and accessible to all.

#column, #economy, #manufacturing, #opinion, #real-estate, #sustainability, #tc, #united-states

The legal world needs to shed its ‘unicorniphobia’

Once upon a time, a successful startup that reached a certain maturity would “go public” — selling securities to ordinary investors, perhaps listing on a national stock exchange and taking on the privileges and obligations of a “public company” under federal securities regulations.

Times have changed. Successful startups today are now able to grow quite large without public capital markets. Not so long ago, a private company valued at more than $1 billion was rare enough to warrant the nickname “unicorn.” Now, over 800 companies qualify.

Legal scholars are worried. A recent wave of academic papers makes the case that because unicorns are not constrained by the institutional and regulatory forces that keep public companies in line, they are especially prone to risky and illegal activities that harm investors, employees, consumers and society at large.

The proposed solution, naturally, is to bring these forces to bear on unicorns. Specifically, scholars are proposing mandatory IPOs, significantly expanded disclosure obligations, regulatory changes designed to dramatically increase secondary-market trading of unicorn shares, expanded whistleblower protections for unicorn employees and stepped-up Securities and Exchange Commission enforcement against large private companies.

This position has also been gaining traction outside the ivory tower. One leader of this intellectual movement was recently appointed director of the SEC’s Division of Corporation Finance. Big changes may be coming soon.

In a new paper titled “Unicorniphobia” (forthcoming in the Harvard Business Law Review), I challenge this suddenly dominant view that unicorns are especially dangerous and should be “tamed” with bold new securities regulations. I raise three main objections.

First, pushing unicorns toward public company status may not help and may actually make problems worse. According to the vast academic literature on “market myopia” or “stock-market short-termism,” it is public company managers who have especially dangerous incentives to take on excessive leverage and risk; to underinvest in compliance; to sacrifice product quality and safety; to slash R&D and other forms of corporate investment; to degrade the environment; and to engage in accounting fraud and other corporate misconduct, among many other things.

The dangerous incentives that produce this parade of horrible outcomes allegedly flow from a constellation of market, institutional, cultural and regulatory features that operate distinctly on public companies, not unicorns, including executive compensation linked to short-term stock performance, pressure to meet quarterly earnings projections (aka “quarterly capitalism”) and the persistent threat (and occasional reality) of a hedge fund activist attack. To the extent this literature is correct, the proposed unicorn reforms would merely amount to forcing companies to shed one set of purportedly dangerous incentives for another.

Second, proponents of new unicorn regulations rely on rhetorical sleight of hand. To show that unicorns pose unique dangers, these advocates rely heavily on anecdotes and case studies of well-known “bad” unicorns, especially the cases of Uber and Theranos, in their papers. Yet the authors make few or no attempts to show how their proposed reforms would have mitigated any significant harm caused by either of these companies — a highly questionable proposition, as I show in great detail in my paper.

Take Theranos, whose founder and CEO Elizabeth Holmes is currently facing trial on charges of criminal fraud and, if convicted, faces a possible sentence of up to 20 years in federal prison. Would any of the proposed securities regulation reforms have plausibly made a positive difference in this case? Allegations that Holmes and others lied extensively to the media, doctors, patients, regulators, investors, business partners and even their own board of directors make it hard to believe they would have been any more truthful had they been forced to make some additional securities disclosures.

As to the proposal to enhance trading of unicorn shares in order to incentivize short sellers and market analysts to sniff out potential frauds, the fact is that these market players already had the ability and incentive to make these plays against Theranos indirectly by taking a short position in its public company partners like Walgreens, or a long position in its public company competitors, like LabCorp and Quest Diagnostics. They failed to do so. Proposals to expand whistleblower protections and SEC enforcement in this domain seem equally unlikely to have made any difference.

Finally, the proposed reforms risk doing more harm than good. Successful unicorns today benefit not only their investors and managers, but also their employees, consumers and society at large. And they do so precisely because of the features of current regulations that are now on the regulatory chopping block. Altering this regime as these papers propose would put these benefits in jeopardy.

Consider one company that recently generated an enormous social benefit: Moderna. Before going public in December 2018, Moderna was a secretive, controversial, overhyped biotech unicorn without a single product on the market (or even in Phase 3 clinical trials), barely any scientific peer-reviewed publications, a history of turnover among high-level scientific personnel, a CEO with a penchant for over-the-top claims about the company’s potential and a toxic work culture.

Had these proposed new securities regulations been in place during Moderna’s “corporate adolescence,” it’s quite plausible that they would have significantly disrupted the company’s development. In fact, Moderna might not have been in a position to develop its highly effective COVID-19 vaccine as rapidly as it did. Our response to the coronavirus pandemic has benefited, in part, from our current approach to securities regulation of unicorns.

The lessons from Moderna also bear on efforts to use securities regulation to combat climate change. According to a recent report, 43 unicorns are operating in “climate tech,” developing products and services designed to mitigate or adapt to global climate change. These companies are risky. Their technologies may fail; most probably will. Some are challenging entrenched incumbents that have powerful incentives to do whatever is necessary to resist the competitive threat. Some may be trying to change well-established consumer preferences and behaviors. And they all face an uncertain regulatory environment, varying widely across and within jurisdictions.

Like other unicorns, they may have highly empowered founder CEOs who are demanding, irresponsible or messianic. They may also have core investors who do not fully understand the science underlying their products, are denied access to basic information and who press the firm to take risks to achieve astronomical results.

And yet, one or more of these companies may represent an important resource for our society in dealing with disruptions from climate change. As policymakers and scholars work out how securities regulation can be used to address climate change, they should not overlook the potentially important role unicorn regulation can play.

#climate-change, #column, #government, #opinion, #policy, #secondary-markets, #theranos, #uber, #unicorns, #venture-capital, #venture-law

20 years later, unchecked data collection is part of 9/11’s legacy

Almost every American adult remembers, in vivid detail, where they were the morning of September 11, 2001. I was on the second floor of the West Wing of the White House, at a National Economic Council Staff meeting — and I will never forget the moment the Secret Service agent abruptly entered the room, shouting: “You must leave now. Ladies, take off your high heels and go!”

Just an hour before, as the National Economic Council White House technology adviser, I was briefing the deputy chief of staff on final details of an Oval Office meeting with the president, scheduled for September 13. Finally, we were ready to get the president’s sign-off to send a federal privacy bill to Capitol Hill — effectively a federal version of the California Privacy Rights Act, but stronger. The legislation would put guardrails around citizens’ data — requiring opt-in consent for their information to be shared, governing how their data could be collected and how it would be used.

But that morning, the world changed. We evacuated the White House and the day unfolded with tragedy after tragedy sending shockwaves through our nation and the world. To be in D.C. that day was to witness and personally experience what felt like the entire spectrum of human emotion: grief, solidarity, disbelief, strength, resolve, urgency … hope.

Much has been written about September 11, but I want to spend a moment reflecting on the day after.

When the National Economic Council staff came back into the office on September 12, I will never forget what Larry Lindsey, our boss at the time, told us: “I would understand it if some of you don’t feel comfortable being here. We are all targets. And I won’t appeal to your patriotism or faith. But I will — as we are all economists in this room — appeal to your rational self-interest. If we back away now, others will follow, and who will be there to defend the pillars of our society? We are holding the line here today. Act in a way that will make this country proud. And don’t abandon your commitment to freedom in the name of safety and security.”

There is so much to be proud of about how the country pulled together and how our government responded to the tragic events of September 11. However, as a professional in the cybersecurity and data privacy field, I reflect on Larry’s advice and on many of the critical lessons learned in the years that followed — especially when it comes to defending the pillars of our society.

Even though our collective memories of that day still feel fresh, 20 years have passed, and we now understand the vital role that data played in the months leading up to the 9/11 terrorist attacks. Unfortunately, by holding intelligence data too closely in disparate locations, we failed to connect the dots that could have saved thousands of lives. These data silos obscured the patterns that would have been clear if only a framework had been in place to share information securely.

So, we told ourselves, “Never again,” and government officials set out to increase the amount of intelligence they could gather — without thinking through significant consequences for not only our civil liberties but also the security of our data. So, the Patriot Act came into effect, with 20 years of surveillance requests from intelligence and law enforcement agencies crammed into the bill. Having been in the room for the Patriot Act negotiations with the Department of Justice, I can confidently say that, while the intentions may have been understandable — to prevent another terrorist attack and protect our people — the downstream negative consequences were sweeping and undeniable.

Domestic wiretapping and mass surveillance became the norm, chipping away at personal privacy, data security and public trust. This level of surveillance set a dangerous precedent for data privacy, meanwhile yielding marginal results in the fight against terrorism.

Unfortunately, the federal privacy bill that we had hoped to bring to Capitol Hill the very week of 9/11 — the bill that would have solidified individual privacy protections — was mothballed.

Over the subsequent years, it became easier and cheaper to collect and store massive amounts of surveillance data. As a result, tech and cloud giants quickly scaled up and dominated the internet. As more data was collected (both by the public and the private sectors), more and more people gained visibility into individuals’ private data — but no meaningful privacy protections were put in place to accompany that expanded access.

Now, 20 years later, we find ourselves with a glut of unfettered data collection and access, with behemoth tech companies and IoT devices collecting data points on our movements, conversations, friends, families and bodies. Massive and costly data leaks — whether from ransomware or simply misconfiguring a cloud bucket — have become so common that they barely make the front page. As a result, public trust has eroded. While privacy should be a human right, it’s not one that’s being protected — and everyone knows it.

This is evident in the humanitarian crisis we have seen in Afghanistan. Just one example: Tragically, the Taliban have seized U.S. military devices that contain biometric data on Afghan citizens who supported coalition forces — data that would make it easy for the Taliban to identify and track down those individuals and their families. This is a worst-case scenario of sensitive, private data falling into the wrong hands, and we did not do enough to protect it.

This is unacceptable. Twenty years later, we are once again telling ourselves, “Never again.” 9/11 should have been a reckoning of how we manage, share and safeguard intelligence data, but we still have not gotten it right. And in both cases — in 2001 and 2021 — the way we manage data has a life-or-death impact.

This is not to say we aren’t making progress: The White House and U.S. Department of Defense have turned a spotlight on cybersecurity and Zero Trust data protection this year, with an executive order to spur action toward fortifying federal data systems. The good news is that we have the technology we need to safeguard this sensitive data while still making it shareable. In addition, we can put contingency plans in place to prevent data from falling into the wrong hands. But, unfortunately, we just aren’t moving fast enough — and the slower we solve this problem of secure data management, the more innocent lives will be lost along the way.

Looking ahead to the next 20 years, we have an opportunity to rebuild trust and transform the way we manage data privacy. First and foremost, we have to put some guardrails in place. We need a privacy framework that gives individuals autonomy over their own data by default.

This, of course, means that public- and private-sector organizations have to do the technical, behind-the-scenes work to make this data ownership and control possible, tying identity to data and granting ownership back to the individual. This is not a quick or simple fix, but it’s achievable — and necessary — to protect our people, whether U.S. citizens, residents or allies worldwide.

To accelerate the adoption of such data protection, we need an ecosystem of free, accessible and open source solutions that are interoperable and flexible. By layering data protection and privacy in with existing processes and solutions, government entities can securely collect and aggregate data in a way that reveals the big picture without compromising individuals’ privacy. We have these capabilities today, and now is the time to leverage them.

Because the truth is, with the sheer volume of data that’s being gathered and stored, there are far more opportunities for American data to fall into the wrong hands. The devices seized by the Taliban are just a tiny fraction of the data that’s currently at stake. As we’ve seen so far this year, nation-state cyberattacks are escalating. This threat to human life is not going away.

Larry’s words from September 12, 2001, still resonate: If we back away now, who will be there to defend the pillars of our society? It’s up to us — public- and private-sector technology leaders — to protect and defend the privacy of our people without compromising their freedoms.

It’s not too late for us to rebuild public trust, starting with data. But, 20 years from now, will we look back on this decade as a turning point in protecting and upholding individuals’ right to privacy, or will we still be saying, “Never again,” again and again?

#column, #counter-terrorism, #department-of-justice, #digital-rights, #mass-surveillance, #national-security, #opinion, #policy, #privacy, #taliban, #zero-trust

Why do the media always pit labor against capital?

The uproar that arose after Dolly Parton rewrote the lyrics to “9 to 5” for a Squarespace Super Bowl commercial revealed a problem with the English language: A worker is no longer a worker.

As she sang in celebration of entrepreneurs:

“Working 5 to 9
you’ve got passion and a vision
‘Cause it’s hustlin’ time
a whole new way to make a livin’
Gonna change your life
do something that gives it meaning…”

Some criticized it, saying it celebrated an “empty promise” of capitalism, as if people aiming to establish their own businesses were “workers” who needed to be protected from powerful corporations. Others grasped that there is more nuance in our economy than ever before and that, perhaps, Parton was on to something.

In fact, her updated lyrics represent a shift in the primacy between capital and labor in the 40 years since she penned the original. Gone is the idea that getting ahead is only a “rich man’s game… puttin’ money in his wallet.” Workers today have a different potential than they did in 1980 when she first sang:

“There’s a better life
And you think about it, don’t you?
It’s a rich man’s game
No matter what they call it,
And you spend your life
Puttin’ money in his wallet.”

There are abusive corporations, and we do need a better social safety net so that people aren’t at the mercy of the doctrine of shareholder primacy, but that truth disguises a more complicated reality. The divide between capital and labor increasingly looks like an anachronism, a throwback to the language and illusory simplicity of another time. Yet still, the media persists in pushing this false dichotomy; this mistaken idea that labor and capital are two separate and oppositional forces in our economy. Perhaps doing so is human nature.

Or perhaps it simply sells more newspapers or generates more clicks. The media certainly thrives on conflict (real or imaginary) and, along with human nature to try to group things into black and white, the continued framing of our economy as somehow consisting of individual actors who exist solely on one side of the capital/labor line makes for easier narratives.

The truth of this aspect of our economy, as with most things, exists in the gray areas. In the nuance and the movement between groups. The U.S. economy has always been uniquely entrepreneurial, from the discovery of the “new land” to the formation of our government to the expansion of our country and eventually its industrialization. Entrepreneurs have long led the way. Today, nearly 60 million people are entrepreneurial in some way.

The vast majority inhabit the frontlines of the economy. They are freelancers or the late-night business starters that Parton sang about. They are freelancing on the side to earn money to support some other dream, or are stitching together lives for themselves by being their own boss. They’re driving Ubers, delivering meals for GrubHub and selling their crafts on Etsy. Never have more people had more access to expand their horizons through pursuing their entrepreneurial dreams than right now. And they exist in the world of technology, where a single person at a kitchen table has the same power to bring an innovation to market as giant corporations did four decades ago.

Victor Hwang, CEO of Right to Start and a former vice president of entrepreneurship for the Kauffman Foundation, described the capital-versus-labor debate as “the biggest false narrative out there. It’s an artificial narrative that we’ve created: employer versus employee; big versus small; corporation versus worker. All are false narratives and contribute to the incorrect notion that the most important fight in our economy exists between these supposedly oppositional forces.”

But our economic and government funding debates are framed, often by the media, around the idea of capitalism versus socialism, corporations versus workers. That increasingly divisive conversation has some of the hallmarks of a deliberately engineered division, like the ones over climate change or gun rights. Right-wing groups with an interest in freezing the government into inaction figured out how to divide the country into two groups and get them fighting.

Why don’t we have universal health care, parental leave, working infrastructure — all things that would, not incidentally, boost entrepreneurship and small business? We’ve been too busy fighting about a socialist takeover and the evils of capitalism.

The conflict thrives in part because we don’t have the right language to describe what’s happening now: “These debates should be viewed as part of a larger discussion,” Hwang said. “We should be striving to encourage highly innovative people and companies. What are the categories we need to develop? How do you classify someone’s role in the economy?”

What we need as an economy is a system that empowers more people to be producers and entrepreneurs. To solve problems and look for opportunities to create change in their communities. Instead, we’ve built a system that supports incumbents; that thrives on the status quo; that stifles innovation and uses the tactics of division to do so. It’s a tension that stems from our neoliberal worldview that achieved an almost consensus in the late 20th and early 21st centuries.

Beyond just arguing that free markets and open trade make it easier and better to do business (which we generally agree with), that worldview also implied that the only thing that mattered in our economy was making big companies bigger (while, perhaps, allowing for the occasional upstart — but only those that had the potential to grow quickly and become big companies themselves). Lost was the value of smaller businesses, operating in the in-between spaces in our economy. We don’t even effectively measure their impact.

Wanting to know how the “economy” is doing, we look no further than the fate of the 500 largest publicly traded companies (the S&P 500) or the 30 massive businesses that comprise the Dow Jones Industrial Average. No wonder people across Main Streets are scratching their heads as pundits describe the economy as thriving by citing the continued rise of the Dow when they can see the millions of small businesses closing all around them.

In our book, “The New Builders,” we describe entrepreneurs as “builders.” Builder is a word with Old English roots in the ideas “to be, exist, grow,” according to the Online Etymology Dictionary. In a century where change is the lingua franca, builders own the value of their own labor as a mechanism to build independence and, eventually, capital.

We often forget that the majority of these builders — the small business owners of America — create opportunities with the most limited resources. According to the Kauffman Foundation, 83% of businesses are formed without the help of either bank financing or venture capital. Yet small businesses are responsible for nearly 40% of U.S. GDP and nearly half of employment. Perhaps that’s why International Economy publisher David Smick termed them “the great equalizer” in his book of the same name.

Technology has fundamentally changed the landscape for businesses of all sizes and has the potential to enable a resurgence of our small business economy. Rather than pushing a false narrative that individuals need to choose between being a part of the labor or capital economies, we should be encouraging fluidity between the two. The more capital ownership we encourage — through savings, investment in their own businesses, and by allowing more and more people to become investors of all kinds — the more we drive wealth creation and open economic activity for generations to come.

A version of this article originally appeared in the Summer 2021 edition of The International Economy Magazine. 

#business, #column, #entrepreneurship, #kauffman-foundation, #labor, #opinion, #small-business, #united-states

Laser-initiated fusion leads the way to safe, affordable clean energy

The quest to make fusion power a reality recently took a massive step forward. The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory announced the results of an experiment with an unprecedented high fusion yield. A single laser shot initiated reactions that released 1.3 megajoules of fusion yield energy with signatures of propagating nuclear burn.

Reaching this milestone indicates just how close fusion actually is to achieving power production. The latest results demonstrate the rapid pace of progress — especially as lasers are evolving at breathtaking speed.

Indeed, the laser is one of the most impactful technological inventions since the end of World War II. Finding widespread use in an incredibly diverse range of applications — including machining, precision surgery and consumer electronics — lasers are an essential part of everyday life. Few know, however, that lasers are also heralding an exciting and entirely new chapter in physics: enabling controlled nuclear fusion with positive energy gain.

After six decades of innovation, lasers are now assisting us in the urgent process of developing clean, dense and efficient fuels, which, in turn, are needed to help solve the world’s energy crisis through large-scale decarbonized energy production. The peak power attainable in a laser pulse has increased every decade by a factor of 1,000.

Physicists recently conducted a fusion experiment that produced 1,500 terawatts of power. For a short period of time, this was four to five times more power than the whole world consumes at a given moment. In other words, we are already able to produce vast amounts of power. Now we also need to produce vast amounts of energy so as to offset the energy expended to drive the igniting lasers.

Beyond lasers, there are also considerable advances on the target side. The recent use of nanostructure targets allows for more efficient absorption of laser energies and ignition of the fuel. This has only been possible for a few years, but here, too, technological innovation is on a steep incline with tremendous advancement from year to year.

In the face of such progress, you may wonder what is still holding us back from making commercial fusion a reality.

There remain two significant challenges: First, we need to bring the pieces together and create an integrated process that satisfies all the physical and technoeconomic requirements. Second, we require sustainable levels of investment from private and public sources to do so. Generally speaking, the field of fusion is woefully underfunded. This is shocking given the potential of fusion, especially in comparison to other energy technologies.

Investments in clean energy amounted to more than $500 billion in 2020. The funds that go into fusion research and development are only a fraction of that. There are countless brilliant scientists working in the sector already, as well as eager students wishing to enter the field. And, of course, we have excellent government research labs. Collectively, researchers and students believe in the power and potential of controlled nuclear fusion. We should ensure financial support for their work to make this vision a reality.

What we need now is an expansion of public and private investment that does justice to the opportunity at hand. Such investments may have a longer time horizon, but their eventual impact is without parallel. I believe that net-energy gain is within reach in the next decade; commercialization, based on early prototypes, will follow in very short order.

But such timelines are heavily dependent on funding and the availability of resources. Considerable investment is being allocated to alternative energy sources — wind, solar, etc. — but fusion must have a place in the global energy equation. This is especially true as we approach the critical breakthrough moment.

If laser-driven nuclear fusion is perfected and commercialized, it has the potential to become the energy source of choice, displacing the many existing, less ideal energy sources. This is because fusion, if done correctly, offers energy that is in equal parts clean, safe and affordable. I am convinced that fusion power plants will eventually replace most conventional power plants and related large-scale energy infrastructure that are still so dominant today. There will be no need for coal or gas.

The ongoing optimization of the fusion process, which results in higher yields and lower costs, promises energy production well below the current price point. At the limit, this corresponds to a source of unlimited energy. If you have unlimited energy, then you also have unlimited possibilities. What can you do with it? I foresee reversing climate change by taking out the carbon dioxide we have put into the atmosphere over the last 150 years.

With a future empowered by fusion technology, you would also be able to use energy to desalinate water, creating unlimited water resources that would have an enormous impact in arid and desert regions. All in all, fusion enables better societies, keeping them sustainable and clean rather than dependent on destructive, dirty energy sources and related infrastructures.

Through years of dedicated research at the SLAC National Accelerator Laboratory, the Lawrence Livermore National Laboratory and the National Ignition Facility, I was privileged to witness and lead the first inertial confinement fusion experiments. I saw the seed of something remarkable being planted and taking root. I have never been more excited than I am now to see the fruits of laser technology harvested for the empowerment and advancement of humankind.

My fellow scientists and students are committed to moving fusion from the realm of possibility into that of reality, but this will require trust and support. A small investment today will have a big impact toward providing a much needed, more welcome energy alternative in the global arena.

I am betting on the side of optimism and science, and I hope that others will have the courage to do so, too.

#clean-energy, #column, #fusion-power, #greentech, #laser, #lawrence-livermore-national-laboratory, #nuclear-fusion, #opinion, #science, #tc

Shared micromobility can help build communities residents deserve

Twenty years ago, many of us had never heard of shared micromobility, let alone contemplated it as a tool for developing healthier, more equitable communities.

But as of 2020, more than 200 cities in North America have at least one shared micromobility system in operation with a combined 169,000 vehicles. As the industry has grown, so too has the realization that something as seemingly small as the way people get from place to place can significantly impact their quality of life.

One of the most surprising yet impactful roles that shared micromobility has filled recently is that of a supporter of racial justice initiatives and events.

According to the North American Bikeshare & Scootershare Association’s 2020 Shared Micromobility State of the Industry Report, agencies and operators provided free or discounted trips for demonstrators to get to events, while many systems donated or fundraised for racial justice nonprofits.

Importantly, the increased attention on diversity, equity and inclusion further brought to light our shortcomings and led to organizational change throughout the industry. For example, 71% of shared micromobility systems stated that diversity was part of every hiring decision in 2020, and 69% reported that women and people of color are represented at all levels of the organization.

Of course, we collectively recognize that we are not where we want or should be. However, these metrics demonstrate intention and mark progress toward improved equity, diversity and inclusion in shared micromobility.

We in the shared micromobility industry are continually adapting our policies and practices to fit the needs of the communities we serve. Whether providing discount programs for lower-income residents or making adaptive vehicles available for people of different abilities, we understand that mobility is a right for everyone.

Even more than that, agencies and operators recognize the importance of providing active modes of mobility for people and communities to build healthier habits, which ultimately can have positive economic, social and environmental impacts.

In 2020, North Americans gained an additional 12.2 million hours of physical activity and offset approximately 29 million pounds of carbon dioxide by utilizing shared micromobility.

Additionally, researchers at Colorado State University calculated that in an average year, bike-share users saved the U.S. healthcare system more than $36 million, while another study concluded that scooter users accounted for $921 of unplanned spending per scooter at food and beverage establishments.

Shared micromobility must be considered a part of public transportation networks to maximize the community benefits and build truly functional cities. Multimodal commuting is becoming more commonplace and sought after by urban travelers. In 2020, 50% of riders reported using shared micromobility to connect to transit, and 16% of the 83.4 million shared micromobility trips taken in the same year were for connecting to public transit. The increased use, and in some cases requirement, of the General Bikeshare Feed Specification (GBFS), an open data standard for shared micromobility, underscores the growing importance of an integrated trip planning user experience.
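
As a concrete illustration of what that data integration looks like, here is a minimal sketch of reading a GBFS feed in Python. The discovery URL is a hypothetical placeholder, and the structure shown follows GBFS 2.x; feed and field names vary by operator and specification version.

```python
# Minimal GBFS reader (sketch). The discovery URL below is a hypothetical
# placeholder; real operators publish their own gbfs.json discovery files.
import requests

DISCOVERY_URL = "https://example.com/gbfs/gbfs.json"  # hypothetical

def get_feed_urls(discovery_url: str) -> dict:
    """Map feed names (e.g. 'station_status') to their URLs (GBFS 2.x layout)."""
    doc = requests.get(discovery_url, timeout=10).json()
    # gbfs.json groups feeds by language: {"data": {"en": {"feeds": [...]}}}
    feeds = next(iter(doc["data"].values()))["feeds"]
    return {feed["name"]: feed["url"] for feed in feeds}

def count_available_bikes(station_status_url: str) -> int:
    """Sum bikes currently available across all stations in the system."""
    doc = requests.get(station_status_url, timeout=10).json()
    return sum(s.get("num_bikes_available", 0) for s in doc["data"]["stations"])

if __name__ == "__main__":
    urls = get_feed_urls(DISCOVERY_URL)
    if "station_status" in urls:
        print("bikes available:", count_available_bikes(urls["station_status"]))
```

Because every operator exposes the same open format, a trip-planning app can combine feeds from multiple systems and from public transit without bespoke integrations.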

When fully taken advantage of, shared micromobility is a powerful tool that helps transform our cities for the better. As cities, states, provinces and nations face equity, social and climate challenges, now is a critical time to engage shared micromobility as a core component of change.

#bike-sharing, #column, #diversity, #micromobility, #north-america, #opinion, #scooter-sharing, #shared-micromobility, #tc, #transportation

We must subsidize and regulate space exploration

In 1989, Tim Berners-Lee invented the World Wide Web (popularizing the modern internet). He didn’t protect the technology because he wanted it to benefit us all. Three decades later, most of the power — and a lot of the profits — of the internet are in the hands of a few tech billionaires, and much of the early promise of the internet remains unfulfilled.

To avoid the same fate for space, we need to subsidize new players to create competition and lower costs, as well as regulate space travel to ensure safety.

Space matters. It could create countless jobs and fuel economies, and may even hold the solution to climate change. Investors can already see this, having poured billions into space companies in an industry with a potential market value of $1.4 trillion by 2030.

Space may seem too vast to be dominated by a few tech billionaires, but in 1989, so did the internet. We need to get this right, because from the mechanics and aerospace engineers to the marketing, information and logistics workers, the space industry could fuel global job creation and economic growth.

For that to happen, we need competition. What we have now is a few players operating perhaps for their founders’ benefits, not the world’s.

We should not repeat the mistakes we made with the internet and wait for the technology to be abused before we step in. For example, in the Cambridge Analytica scandal, a private technology company used weapons-grade social media manipulation to pursue its own profit (which is its obligation to its shareholders) but to society’s harm (which it is regulators’ job to guard against).

In space, the stakes are even higher. They also affect all of humanity, not a few countries. There are environmental dangers (we are probing the carbon cost of “Earth” flights, but not space flights), and an accident could not only cause loss of life in space but also send fatal debris to Earth.

These dangers are not unforeseen. Virgin Galactic had its first fatality in 2014. A SpaceX launch puts out as much carbon dioxide as flying around 300 people across the Atlantic. Earlier this year, some unguided space debris from a Chinese rocket landed in the Indian Ocean near the Maldives.

We should not wait until these accidents happen again — perhaps at a bigger scale — before we act.

Space tourism can and should be about much more than giving the 1% another Instagrammable moment and increasing the wealth of the billionaires who provide the service.

The space industry should be managed in a way that delivers the most good to the largest number of people. That starts with subsidies.

In short, we should treat space travel like any other form of transit. Making that sustainable economically will almost inevitably require some government intervention.

We have been here before: When the combination of air travel, highways and rising labor costs led the two largest railways in the United States to bankruptcy, the Nixon administration intervened and created Amtrak.

This wasn’t ideologically fueled (quite the opposite). This was a decision to make sure the U.S. reaped the economic benefits of interstate travel. Even though Amtrak remains unprofitable 50 years after its creation, it is a crucial piece of economic infrastructure upon which many other industries — as well as millions of individuals and families — rely.

We need to do the same with space travel. Very few individuals will benefit from what will be an uber-luxury segment of the travel market, with Virgin Galactic tickets predicted to cost $250,000 (and that is the entry-level space travel product; Virgin’s competitors are priced at multiples of that cost).

If we subsidize the industry now, while ensuring there are new competitors in space, we can help it reach a critical mass where all the broader benefits of space travel become a reality.

This will be much easier than waiting for monopolies to emerge and then trying to fight them (which is what the U.S. Federal Trade Commission is trying to do, decades too late, to Big Tech).

Space travel is not just hype or the plaything of billionaires. It is the final frontier, both physically and economically.

If we want it to be a success, we should learn from our successes and failures back on Earth and apply them to space now.

That means subsidies, support, regulation and safety. These things are important on Earth, but in space they are absolutely essential.

#aerospace, #column, #government, #opinion, #policy, #space, #space-debris, #space-tourism, #space-travel, #spaceflight, #tc, #virgin-galactic

Startups should look to state-of-the-art tech to tackle diseases affecting women

Startups devoted to reproductive and women’s health are on the rise. However, most of them deal with women’s fertility: birth control, ovulation and the inability to conceive. The broader field of women’s health remains neglected.

Historically, most of our understanding of ailments comes from the perspective of men and is overwhelmingly based on studies using male patients. Until the early 1990s, women of childbearing age were kept out of drug trial studies, and the resulting bias has been an ongoing issue in healthcare. Other issues include underrepresentation of women in health studies, trivialization of women’s physical complaints (which is relevant to the misdiagnosis of endometriosis, among other conditions), and gender bias in the funding of research, especially in research grants.

For example, several studies have shown that when we look at National Institutes of Health funding, a disproportionate share of its resources goes to diseases that primarily affect men — at the expense of those that primarily affect women. In 2019, studies of NIH funding based on disease burden (as estimated by the number of years lost due to an illness) showed that male-favored diseases were funded at twice the rate of female-favored diseases.

Let’s take endometriosis as an example. Endometriosis is a disease in which endometrial-like tissue (“lesions”) grows outside the uterus. It occurs only in individuals with uteruses and has been less funded and less studied than many other conditions. It can cause chronic pain, fatigue, painful intercourse and infertility. Although the disease may affect one in 10 women, diagnosis is still very slow, and it can be confirmed only by surgery.

There is no non-invasive test available. In many cases, a woman is diagnosed only because of her infertility, and the diagnosis can take up to 10 years. Even after diagnosis, our understanding of the disease’s biology and progression is poor, as is our understanding of its relationship to other lesion-forming conditions, such as adenomyosis. Current treatments include surgical removal of lesions and drugs that suppress ovarian hormone (mainly estrogen) production.

However, there are changes in the works. The NIH created the women’s health research category in 1994 for annual budgeting purposes and, in 2019, it was updated to include research that is relevant to women only. In acknowledging the widespread male bias in both human and animal studies, the NIH mandated in 2016 that grant applicants would be required to recruit male and female participants in their protocols. These changes are slow, and if we look at endometriosis, it received just $7 million in NIH funding in the fiscal year 2018, putting it near the very bottom of NIH’s 285 disease/research areas.

It is interesting to note that the critical changes are coming not so much from funding agencies or the pharmaceutical industry as from other sources. The push is coming from patients and from the physicians who encounter these diseases regularly. We see pharmaceutical companies in the women’s healthcare space (such as Eli Lilly and AbbVie) following the lead of their patients, slowly expanding their R&D base and redoubling efforts to move beyond reproductive health into other key women’s health areas.

New technological innovations targeting endometriosis are being funded via private sources, and in 2020, women’s health finally emerged as one of the most promising areas of investment. Examples (by no means an exhaustive list) include diagnostics companies such as NextGen Jane, which raised a $9 million Series A in April 2021 for its “smart tampon,” and DotLab, a non-invasive endometriosis testing startup that raised $10 million from investors last July. Other notable advances include Phendo, a research-study app that tracks endometriosis, and Gynica, a company focused on cannabis-based treatments for gynecological issues.

The complexity of endometriosis is such that any single biotech startup may find it challenging to go it alone. One way to tackle this is through collaboration. Two companies, Polaris Quantum Biotech and Auransa, have teamed up to take on the endometriosis challenge and other women-specific diseases.

Using data, algorithms and quantum computing, this collaboration between two female-led AI companies integrates the understanding of disease biology with chemistry. Moreover, they are not stopping at in silico work; the collaboration aims to bring therapeutics to patients.

New partnerships can have a major impact on how fast a field like women’s health advances. Without such concerted efforts, women-centric diseases such as endometriosis, triple-negative breast cancer and ovarian cancer, to name a few, may remain neglected, and much-needed therapeutics may not move into clinics promptly.

Using state-of-the-art technologies on complex women’s diseases will allow the field to advance much faster and can put drug candidates into clinics in a few short years, especially with the help of patient advocacy groups, research organizations, physicians and out-of-the-box funding approaches such as crowdfunding from the patients themselves.

We believe that going after the women’s health market is a win-win for patients and for business, as the global market for endometriosis drugs alone is expected to reach $2.2 billion in the next six years.

#column, #endometriosis, #health, #healthcare, #infertility, #opinion, #startups, #tc, #womens-health

Move fast and break Facebook: A bull case for antitrust enforcement

This is the second post in a series on the Facebook monopoly. The first post explored how the U.S. Federal Trade Commission should define the Facebook monopoly. I am inspired by Cloudflare’s recent post explaining the impact of Amazon’s monopoly in its industry.

Perhaps it was a competitive tactic, but I genuinely believe it was more a patriotic duty: offering guideposts for legislators and regulators on a complex issue. My generation has watched with a combination of sadness and trepidation as legislators who barely use email question the leading technologists of our time about products that have long pervaded our lives in ways we don’t yet understand.

I, personally, and my company both stand to gain little from this — but as a participant in the latest generation of social media upstarts, and as an American concerned for the future of our democracy, I feel a duty to try.


Mark Zuckerberg has reached his Key Largo moment.

In May 1972, executives of the era’s preeminent technology company — AT&T — met at a secret retreat in Key Largo, Florida. Their company was in crisis.

At the time, Ma Bell’s breathtaking monopoly consisted of a holy trinity: Western Electric (the vast majority of phones and cables used for American telephony), the lucrative long distance service (for both personal and business use) and local telephone service, which the company subsidized in exchange for its monopoly.

Over the next decade, all three government branches — legislators, regulators and the courts — parried with AT&T’s lawyers as the press piled on, battering the company’s reputation in the process. By 1982, a consent decree forced AT&T’s dismantling. The biggest company on earth withered to 30% of its book value and seven independent “Baby Bell” regional operating companies. AT&T’s brand would live on, but the business as the world knew it was dead.

Mark Zuckerberg is, undoubtedly, the greatest technologist of our time. For over 17 years, he has outgunned, outsmarted and outperformed like no software entrepreneur before him. Earlier this month, the U.S. Federal Trade Commission refiled its sweeping antitrust case against Facebook.

Its own holy trinity of Facebook Blue, Instagram and WhatsApp is under attack. All three government branches — legislators, regulators and the courts — are gaining steam in their fight, and the press is piling on, battering the company’s reputation in the process. Facebook, the AT&T of our time, is at the brink. For so long, Zuckerberg has told us all to move fast and break things. It’s time for him to break Facebook.

If Facebook does exist to “make the world more open and connected, and not just to build a company,” as Zuckerberg wrote in the 2012 IPO prospectus, he will spin off Instagram and WhatsApp now so that they have a fighting chance. It would be the ultimate Zuckerbergian chess move. Zuckerberg would lose voting control and thus power over all three entities, but in his action he would successfully scatter the opposition. The rationale is simple:

  1. The United States government will break up Facebook. It is not a matter of if; it is a matter of when.
  2. Facebook is already losing. Facebook Blue, Instagram and WhatsApp all face existential threats. Pressure from the government will stifle Facebook’s efforts to right the ship.
  3. Facebook will generate more value for shareholders as three separate companies.

I write this as an admirer; I genuinely believe much of the criticism Zuckerberg has received is unfair. Facebook faces Sisyphean tasks. The FTC will not let Zuckerberg sneeze without an investigation, and the company has failed to innovate.

Given no chance to acquire new technology and talent, how can Facebook survive over the long term? In 2006, Terry Semel of Yahoo offered $1 billion to buy Facebook. Zuckerberg reportedly remarked, “I just don’t know if I want to work for Terry Semel.” Even if the FTC were to allow it, this generation of founders will not sell to Facebook. Unfair or not, Mark Zuckerberg has become Terry Semel.

The government will break up Facebook

It is not a matter of if; it is a matter of when.

In a speech on the floor of Congress in 1890, Senator John Sherman, the founding father of the modern American antitrust movement, famously said, “If we will not endure a king as a political power, we should not endure a king over the production, transportation and sale of any of the necessities of life. If we would not submit to an emperor, we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.”

This is the sentiment driving the building resistance to Facebook’s monopoly, and it shows no sign of abating. Zuckerberg has proudly called Facebook the fifth estate. In the U.S., we only have four estates.

All three branches of the federal government are heating up their pursuit. In the Senate, an unusual bipartisan coalition is emerging, with Senators Amy Klobuchar (D-MN), Mark Warner (D-VA), Elizabeth Warren (D-MA) and Josh Hawley (R-MO) each waging a war from multiple fronts.

In the House, Speaker Nancy Pelosi (D-CA) has called Facebook “part of the problem.” Lina Khan’s FTC is likewise only getting started, with unequivocal support from the White House that feels burned by Facebook’s disingenuous lobbying. The Department of Justice will join, too, aided by state attorneys general. And the courts will continue to turn the wheels of justice, slowly but surely.

In the wake of Facebook co-founder Chris Hughes’ scathing 2019 New York Times op-ed, Zuckerberg said that Facebook’s immense size allows it to spend more on trust and safety than Twitter makes in revenue.

“If what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg said.

This could be true, but it does not prove that the concentration of such power in one man’s hands is consistent with U.S. public policy. And the centralized operations could be rebuilt easily in standalone entities.

Time and time again, whether on Holocaust denial, election propaganda or vaccine misinformation, Zuckerberg has struggled to make quick judgments when presented with the information his trust and safety team uncovers. And even before a decision is made, the structure of the team disincentivizes it from even measuring anything that could harm Facebook’s brand. This is inherently inconsistent with U.S. democracy. The New York Times’ army of reporters will not stop uncovering scandal after scandal, contradicting Zuckerberg’s narrative. The writing is on the wall.

Facebook is losing

Facebook Blue, Instagram and WhatsApp all face existential threats. Pressure from the government will stifle Facebook’s efforts to right the ship.

For so long, Facebook has dominated the social media industry. But if you ask Chinese technology executives about Facebook today, they quote Tencent founder Pony Ma: “When a giant falls, his corpse will still be warm for a while.”

Facebook’s recent demise begins with its brand. The endless, cascading scandals of the last decade have irreparably harmed its image. Younger users refuse to adopt the flagship Facebook Blue. The company’s internal polling on two key metrics — good for the world (GFW) and cares about users (CAU) — shows Facebook’s reputation is in tatters. Talent is fleeing, too; Instacart alone recently poached 55 Facebook executives.

In 2012 and 2014, Instagram and WhatsApp were real dangers. Facebook extinguished both through acquisition. Yet today they represent the company’s two most promising, underutilized assets. They are the underinvested telephone networks of our time.

Weeks ago, Instagram head Adam Mosseri announced that the company no longer considers itself a photo-sharing app. Instead, its focus is entertainment. In other words, as the media widely reported, Instagram is changing to compete with TikTok.

TikTok’s strength represents an existential threat. U.S. children 4 to 15 already spend over 80 minutes a day on ByteDance’s TikTok, and it’s just getting started. The demographics are quickly expanding way beyond teenagers, as social products always have. For Instagram, it could be too little too late — as a part of Facebook, Instagram cannot acquire the technology and retain the talent it needs to compete with TikTok.

Imagine Instagram acquisitions of Squarespace to bolster its e-commerce offerings, or Etsy to create a meaningful marketplace. As a part of Facebook, Instagram is strategically adrift.

Likewise, a standalone WhatsApp could easily be a $100 billion market cap company. WhatsApp has a proud legacy of robust security offerings, but its brand has been tarnished by associations with Facebook. Discord’s rise represents a substantial threat, and WhatsApp has failed to innovate to account for this generation’s desire for community-driven messaging. Snapchat, too, is in many ways a potential WhatsApp killer; its young users use photography and video as a messaging medium. Facebook’s top augmented reality talents are leaving for Snapchat.

With 2 billion monthly active users, WhatsApp could be a privacy-focused alternative to Facebook Blue, and it would logically introduce expanded profiles, photo-sharing capabilities and other features that would strengthen its offerings. Inside Facebook, WhatsApp has suffered from underinvestment as a potential threat to Facebook Blue and Messenger. Shareholders have suffered for it.

Beyond Instagram and WhatsApp, Facebook Blue itself is struggling. Q2’s earnings may have skyrocketed, but the revenue increase hid a troubling sign: The average price per ad rose 47%, while the number of ads delivered grew by just 6%. This means Facebook is struggling to find new places to run its ads. Why? The core social graph of Facebook is too old.

I fondly remember the day Facebook came to my high school; I have thousands of friends on the platform. I do not use Facebook anymore — not for political reasons, but because my friends have left. A decade ago, hundreds of people wished me happy birthday every year. This year it was 24, half of whom are over the age of 50. And I’m 32 years old. Teen girls run the social world, and many of them don’t even have Facebook on their phones.

Zuckerberg’s newfound push into the metaverse has been well covered, but the question remains: Why wouldn’t a Facebook serious about the metaverse acquire Roblox? Of course, the FTC would currently never allow it.

Facebook’s current clunky attempt at a hardware solution, with an emphasis on the workplace, shows little sign of promise. The launch was hardly propitious, as CNN reported, “While Bosworth, the Facebook executive, was in the middle of describing how he sees Workrooms as a more interactive way to gather virtually with coworkers than video chat, his avatar froze midsentence, the pixels of its digital skin turning from flesh-toned to gray. He had been disconnected.”

This is not the indomitable Facebook of yore. This is graying Facebook, freezing midsentence.

Facebook will generate more value for shareholders as three separate companies

Zuckerberg’s control of 58% of Facebook’s voting shares has forestalled a typical Wall Street reckoning: Investors are tiring of Zuckerberg’s unilateral power. Many justifiably believe the company is more valuable as the sum of its parts. The success of AT&T’s breakup is a case in point.

Five years after AT&T’s 1984 breakup, AT&T and the Baby Bells’ value had doubled compared to AT&T’s pre-breakup market capitalization. Pressure from Japanese entrants battered Western Electric’s market share, but greater competition in telephony spurred investment and innovation among the Baby Bells.

AT&T turned its focus to competing with IBM and preparing for the coming information age. A smaller AT&T became more nimble, ready to focus on the future rather than dwell on the past.

Standalone Facebook Blue, Instagram and WhatsApp could drastically change their futures by attracting talent and acquiring new technologies.

The U.K.’s recent opposition to Facebook’s $400 million GIPHY acquisition proves Facebook will struggle mightily to acquire even small bolt-ons.

Zuckerberg has always been one step ahead. And when he wasn’t, he was famously unprecious: “Copying is faster than innovating.” If he really believes in Facebook’s mission and recognizes that the situation cannot possibly get any better from here, he will copy AT&T’s solution before it is forced upon him.

Regulators are tying Zuckerberg’s hands behind his back as the company weathers body blows and uppercuts from Beijing to Silicon Valley. As Zuckerberg’s idol Augustus Caesar might have once said, carpe diem. It’s time to break Facebook.

#antitrust, #column, #congress, #facebook, #government, #instagram, #lina-khan, #mark-zuckerberg, #messenger, #opinion, #policy, #social, #social-media, #tc, #united-states, #whatsapp

The stars are aligning for federal IT open source software adoption

In recent years, the private sector has been spurning proprietary software in favor of open source software and development approaches. For good reason: The open source avenue saves money and development time by using freely available components instead of writing new code, enables new applications to be deployed quickly and eliminates vendor lock-in.

The federal government has been slower to embrace open source, however. Efforts to change are complicated by the fact that many agencies employ large legacy IT infrastructure and systems to serve millions of people and are responsible for a plethora of sensitive data. Washington spends tens of billions every year on IT, but with each agency essentially acting as its own enterprise, decision-making is far more decentralized than it would be at, say, a large bank.

While the government has made a number of moves in a more open direction in recent years, the story of open source in federal IT has often seemed more about potential than reality.

But there are several indications that this is changing and that the government is reaching its own open source adoption tipping point. The costs of producing modern applications to serve increasingly digital-savvy citizens keep rising, and budget-constrained agencies must find ways to improve service while saving taxpayer dollars.

Sheer economics dictate an increased role for open source, as do a variety of other benefits. Because its source code is publicly available, open source software encourages continuous review by others outside the initial development team to promote increased software reliability and security, and code can be easily shared for reuse by other agencies.

Here are five signs I see that the U.S. government is increasingly rallying around open source.

More dedicated resources for open source innovation

Two initiatives have gone a long way toward helping agencies advance their open source journeys.

18F, a team within the General Services Administration that acts as a consultancy to help other agencies build digital services, is an ardent open source backer. Its work has included developing a new application for accessing Federal Election Commission data, as well as software that has allowed the GSA to improve its contractor hiring process.

18F — short for GSA headquarters’ address of 1800 F St. — reflects the same grassroots ethos that helped spur open source’s emergence and momentum in the private sector. “The code we create belongs to the public as a part of the public domain,” the group says on its website.

Five years ago this August, the Obama administration introduced a new Federal Source Code Policy that called on every agency to adopt an open source approach, create a source code inventory, and publish at least 20% of written code as open source. The administration also launched Code.gov, giving agencies a place to locate open source solutions that other departments are already using.

The results have been mixed, however. Most agencies are now compliant with the federal policy’s goals, though many still have work to do on implementation, according to Code.gov’s tracker. And a report by a Code.gov staffer found that some agencies were embracing open source more than others.

Still, Code.gov says the growth of open source in the federal government has gone farther than initially estimated.

A push from the new administration

The American Rescue Plan, a $1.9 trillion pandemic relief bill that President Biden signed in early March 2021, included $1 billion (down from the $9 billion originally proposed) for the GSA’s Technology Modernization Fund, which finances new federal technology projects. In January, the White House said upgrading federal IT infrastructure and addressing recent breaches such as the SolarWinds hack was “an urgent national security issue that cannot wait.”

It’s fair to assume open source software will form the foundation of many of these efforts, because White House technology director David Recordon is a long-time open source advocate and once led Facebook’s open source projects.

A changing skills environment

Federal IT employees who spent much of their careers working on legacy systems are starting to retire, and their successors are younger people who came of age in an open source world and are comfortable with it.

About 81% of private sector hiring managers surveyed by the Linux Foundation said hiring open source talent is a priority and that they’re more likely than ever to seek out professionals with certifications. You can be sure the public sector is increasingly mirroring this trend as it recognizes a need for talent to support open source’s growing foothold.

Stronger capabilities from vendors

By partnering with the right commercial open source vendor, agencies can drive down infrastructure costs and more efficiently manage their applications. For example, vendors have made great strides in addressing security requirements laid out by policies such as the Federal Information Security Modernization Act (FISMA), the Federal Information Processing Standards (FIPS) and the Federal Risk and Authorization Management Program (FedRAMP), making it easier to deal with compliance.

In addition, some vendors offer powerful infrastructure automation tools and generous support packages, so federal agencies don’t have to go it alone as they accelerate their open source strategies. Linux distributions like Ubuntu provide a consistent developer experience from laptop and workstation to the cloud and the edge, across public clouds, containers, and physical and virtual infrastructure.

This makes application development a well-supported activity, with 24/7 coverage that gives agencies access to world-class enterprise support teams through web portals, knowledge bases or a phone call.

The pandemic effect

Whether it’s accommodating more employees working from home or meeting higher citizen demand for online services, COVID-19 has forced large swaths of the federal government to up their digital game. Open source allows legacy applications to be moved to the cloud, new applications to be developed more quickly, and IT infrastructures to adapt to rapidly changing demands.

As these signs show, the federal government continues to move rapidly from talk to action in adopting open source.

Who wins? Everyone!

#column, #developer, #federal-election-commission, #free-software, #government, #linux, #linux-foundation, #open-source-software, #open-source-technology, #opinion, #policy, #solarwinds, #ubuntu

To prevent cyberattacks, the government should limit the scope of a software bill of materials

The May 2021 executive order from the White House on improving U.S. cybersecurity includes a provision for a software bill of materials (SBOM), a formal record containing the details and supply chain relationships of various components used in building a software product.

An SBOM is the full list of every item that’s needed to build an application. It enumerates all parts, including open-source software (OSS) dependencies (direct), transitive OSS dependencies (indirect), open-source packages, vendor agents, vendor application programming interfaces (APIs) and vendor software development kits.

Software developers and vendors often create products by assembling existing open-source and commercial software components, the executive order notes. An SBOM is useful to those who develop or manufacture software, those who select or purchase software and those who operate it.

As the executive order describes, an SBOM enables software developers to make sure open-source and third-party components are up to date. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. And those who operate software can use SBOMs to quickly determine whether they are at potential risk of a newly discovered vulnerability.

“A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration,” the executive order says. “The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM and using it to analyze known vulnerabilities are crucial in managing risk.”

An SBOM is intrinsically hierarchical. The finished product sits at the top, and the hierarchy below it includes all of the dependencies that provide the foundation for its functionality. A compromise of any one of these parts can ripple through the entire structure.
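
To make that hierarchy concrete, here is a minimal sketch in Python with entirely hypothetical component names. A real SBOM would use a standardized, machine-readable format such as SPDX or CycloneDX, but the shape is the same: the product at the top, dependencies nested beneath it, and every deep, transitive dependency implicating the chain above it.

    # Hypothetical hierarchical SBOM: the finished product sits at the top and
    # each component lists the dependencies that support it.
    sbom = {
        "name": "payments-service",
        "version": "2.3.1",
        "dependencies": [
            {
                "name": "web-framework",
                "version": "4.8.0",
                "dependencies": [
                    {"name": "http-parser", "version": "1.2.9", "dependencies": []},
                ],
            },
            {"name": "logging-lib", "version": "2.14.0", "dependencies": []},
        ],
    }

    def walk(component, path=()):
        """Yield every component along with the chain of parents above it."""
        chain = path + (f"{component['name']}@{component['version']}",)
        yield chain
        for dep in component["dependencies"]:
            yield from walk(dep, chain)

    # Printing each chain shows the ripple effect: a flaw in a deep transitive
    # dependency (http-parser) implicates everything above it.
    for chain in walk(sbom):
        print(" -> ".join(chain))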

Not surprisingly, given the potential impact, there has been a lot of talk about the proposed SBOM provision since the executive order was announced, certainly within the cybersecurity community. Anytime attacks such as the ones against Equifax or SolarWinds involve exploited software vulnerabilities, there is renewed interest in this type of concept.

Clearly, the intention of an SBOM is good. If software vendors are not upgrading dependencies to eliminate security vulnerabilities, the thinking is we need to be able to ask the vendors to share their lists of dependencies. That way, the fear of customer or public ridicule might encourage the software producers to do a better job of upgrading dependencies.

However, this is an old and outmoded way of thinking. Modern applications and microservices use many dependencies. It’s not uncommon for a small application to use dozens of dependencies, which in turn might pull in other dependencies. Soon the list of dependencies used by a single application can run into the hundreds. And if a modern application consists of a few hundred microservices, which is not uncommon, the list of dependencies can run into the thousands.

If a software vendor were to publish such an extensive list, how would the end users of that software really benefit? Yes, we could also ask the software vendor to flag which of the dependencies are vulnerable, and let’s say that list runs into the hundreds. Now what?

Clearly, having to upgrade hundreds of vulnerable dependencies is not a trivial task. A software vendor would be constantly deciding between adding new functionality that generates revenue and allows the company to stay ahead of its competitors versus upgrading dependencies that don’t do either.

If the government formalizes an SBOM mandate and starts to financially penalize vendors that have vulnerable dependencies, it is clear that, given the complexity of upgrading dependencies, software vendors might choose to pay fines rather than risk losing revenue or competitive advantage in the market.

Revenue drives market capitalization, which in turn drives executive and employee compensation. Fines, as small as they are, have negligible impact on the bottom line. In a purely economic sense, the choice is fairly obvious.

In addition, software vendors typically do not want to publish lists of all their dependencies because that provides a lot of information to hackers and other bad actors as well as to competitors. It’s bad enough that cybercriminals are able to find vulnerabilities on their own. Providing lists of dependencies gives them even more possible resources to discover weaknesses.

Customers and users of the software, for their part, don’t want to know all the dependencies. What would they gain from studying a list of hundreds of dependencies? Rather, software vendors and their customers want to know which dependencies, if any, make the application vulnerable. That really is the key question.

Prioritizing software composition analysis (SCA), which analyzes dependencies in the context of the application that uses them, can dramatically shrink the list of dependencies that actually make the application vulnerable.

Instead of publishing a list of 1,000 dependencies, or 100 that are vulnerable, organizations can publish a far more manageable list in the single digits. That is a problem organizations can deal with much more easily. Sometimes a software vendor can even fix an issue without upgrading the dependency at all, for example by making changes in its own code, an option that isn’t visible if all we have is a list of vulnerable dependencies.
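
As a rough illustration of that idea (not a sketch of any particular SCA product), assume we have three hypothetical inputs: the full dependency inventory, the subset with known vulnerabilities from an advisory database, and the subset the application actually exercises. Analyzing dependencies in the context of the application is what shrinks the list from hundreds to a handful.

    # All package names and versions here are hypothetical, for illustration only.
    all_dependencies = ["pkg-a@1.0", "pkg-b@2.3", "pkg-c@0.9", "pkg-d@5.1"]
    known_vulnerable = {"pkg-b@2.3", "pkg-c@0.9"}   # matches against advisories
    used_by_the_app = {"pkg-a@1.0", "pkg-b@2.3"}    # code paths the app actually exercises

    # Step 1: vulnerability analysis alone can still leave a long list.
    vulnerable = [d for d in all_dependencies if d in known_vulnerable]

    # Step 2: analyzing dependencies in the context of the application cuts the
    # list down to the ones that actually expose it to risk.
    actionable = [d for d in vulnerable if d in used_by_the_app]

    print(f"{len(all_dependencies)} dependencies, {len(vulnerable)} vulnerable, "
          f"{len(actionable)} actionable: {actionable}")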

There is no reason to disdain the concept of an SBOM outright. By all means, let’s make software vendors responsible for being transparent about what goes into their products. Plenty of organizations have paid a steep price, in the form of data breaches and other cyberattacks, because of software vulnerabilities that could have been prevented.

Indeed, it’s heartening to see the federal government take cybersecurity so seriously and propose ways to enhance the protection of applications and data.

However, let’s make SBOM specific to the list of dependencies that actually make the application vulnerable. This serves both the vendor and its customers by cutting directly to the sources of vulnerabilities that can do damage. That way, we can address the issues at hand without creating unnecessary burdens.

#column, #cybersecurity, #government, #hacking, #open-source-software, #opinion, #policy, #security, #solarwinds, #tc, #united-states

A California judge just struck down Prop 22: Now what?

Every time you turn around, someone new is winning the war in California around organizing workers in the sharing economy.

Labor struck first when California legislators passed Assembly Bill 5, requiring all independent contractors working for gig economy companies to be reclassified as employees. That was expected to set off a chain reaction in state legislatures nationwide, until two things happened.

First, COVID-19 hit and quickly became all-encompassing, making it virtually impossible for lawmakers and regulators to focus on anything but surviving the pandemic. Second, Uber, Lyft, Instacart and others funded, and voters approved, Prop 22 in California, exempting app-based gig companies from AB 5 and returning sharing economy workers to independent contractor status.

On the same day that Prop 22 passed, Democrats captured both chambers of Congress in Washington, but their margins were so slim (50-50 in the Senate and a nine-vote majority in the House) that federal legislative action on the issue was nearly impossible. Across the country, politicians read the tea leaves of Prop 22 and decided to mostly stay away, which kept the issue at bay during the 2021 state legislative sessions.

But the tide started to turn again this year. First, U.S. Rep. Bobby Scott (D-Virginia) introduced the PRO Act in February 2021, which would reclassify workers using an ABC test, roll back state right-to-work laws and establish monetary penalties for companies and executives who violate workers’ rights.

The bill handily passed the House in March, but has since stalled in the Senate, despite receiving a hearing and energetic support by high-profile senators including Bernie Sanders and Majority Leader Chuck Schumer.

The Biden administration’s appointees to the Department of Labor and the National Labor Relations Board are decidedly in favor of full-time-worker status. And now, a California Superior Court judge has ruled Prop 22 unconstitutional, saying it violates the right of the state legislature to pass future laws around worker safety and status.

The sharing economy companies are expected to appeal, and the case could ultimately wind up before the California Supreme Court.

So now what? The courts will ultimately determine the status of sharing economy workers in California, but since the decision will be about the specific legal parameters of California’s referendum process, it won’t determine the issue elsewhere. And despite noise from Washington, Congress isn’t passing the PRO Act any time soon (Democrats may try to include it in the reconciliation for the $3.5 trillion American Families Plan, but the odds of its survival are low). That means the action returns to the states.

New York is the biggest battleground outside of California. Democrats have amassed a supermajority in both chambers of the legislature, and New York lacks a referendum vehicle to overturn state law.

Sharing economy workers are the biggest organizing opportunity for private sector unions in decades, and labor will use all of its influence to pass worker classification reform in 2022.

However, Kathy Hochul, New York’s new governor, is a moderate, and state legislators recently abandoned a half-baked plan brokered by gig companies to safeguard independent contractor status, indicating a resolution on the issue will likely take time.

Illinois is fertile ground for worker reclassification, too, but the state remains a question mark.

There’s also a chance of movement in Massachusetts, where gig companies are making a play to establish a ballot initiative very similar to Prop 22. Legislators in Seattle and Pennsylvania have also signaled an interest in exploring the issue.

And just a few months after most state legislative sessions conclude next summer, we’ll hit the midterm elections, which could produce a Republican wave (especially in the House) that would yet again quash the chances of worker classification legislation passing anywhere.

In other words, this is going to ping back and forth for at least the next few years in the courts, in state legislatures, and in the halls of Congress and federal agencies. If you’re a sharing economy investor and you want this issue resolved once and for all, that peace of mind isn’t coming. And the market, rather than accepting that this will be an unresolved issue for the next few years, will probably overreact to each individual action, whether it’s a lower court ruling or a piece of legislation making its way through a state.

In reality, the answer is the same as it’s always been: Trying to shoehorn sharing economy workers into one of two existing categories — 1099 or W-2 — doesn’t work. We need to recognize that the inherent nature of work has changed over the last decade, and that both parties — the sharing economy companies and the unions — are only looking out for their own interests and coffers at the expense of what’s best for actual workers.

California is not going to resolve this issue. It’s just swung back and forth from one extreme to another. Congress is not going to resolve this issue because it almost never resolves anything.

So the game comes down to states like Illinois, New York and Massachusetts. It comes down to legislators and leaders trying to craft good public policy at the expense of their donors and supporters and Twitter followers — and then it comes down to their colleagues doing the same.

It means sacrificing politics for policy. That almost never happens. And it probably won’t happen here, either. So if you’re trying to game out where this issue is going, accept the uncertainty and expect that a thoughtful, smart resolution — locally or nationally — is unlikely. It’s a dissatisfying conclusion but, sadly, it epitomizes exactly where our politics stand today.

#bernie-sanders, #biden-administration, #california, #column, #congress, #government, #illinois, #labor, #lyft, #national-labor-relations-board, #new-york, #opinion, #policy, #sharing-economy, #tc, #uber, #washington

It’s time for the VC community to stop overlooking the childcare industry

Square. Uber. Zillow. Airbnb. Besides being some of the biggest technology companies, what else do these titans have in common? They all operate in entrenched, highly fragmented, geographically localized and regulated industries. That means they required a lot of upfront venture capital investment to disrupt their respective markets. And the investment has paid off — these are now some of the most valuable companies in the world.

Venture capital hasn’t been the only funding behind some of the largest companies. One of today’s most successful tech entrepreneurs was funded by massive infusions of investment from the federal government — Elon Musk received $4.9 billion in public subsidies for his companies, including SpaceX and Tesla. Moreover, government investment, via tax credits for electric vehicle purchases, made it more affordable for consumers to buy the green transportation they needed.

But one massive industry has not yet benefited from the large amounts of money that both venture capital and government can provide: Childcare. Families in the United States spend $136 billion on infant and child care every year, and the market is only growing. If you include school-age care and education for all children under 18, that number grows to $212 billion. In investor terms, the TAM (total addressable market) is huge.

So where is the investment? Biden’s current compromise on an infrastructure plan does not include many provisions for childcare. Venture investment in this space is nascent and insufficient. In 2020, only $171 million was invested in care and early childhood education. The funding situation has improved in 2021, with $516 million invested in childcare, but it’s still just a tiny fraction of the $288 billion of venture capital invested so far this year.

To put that in perspective, a single new company has raised more funding in 2021 than the entire childcare industry.

Funding emerging childcare technology may require a lot of upfront capital. For starters, the industry is regulated, and safety is, and should remain, a priority. Caring for and educating young children takes training, skill and love — it cannot be done by a computer.

But there are so many facets of the industry that are ripe for innovation. Parents sometimes take weeks to find a childcare provider that meets their needs. In some markets, there is not nearly enough supply (three children for every licensed slot) to meet the demand. Assessing quality, pricing and availability is challenging, and payments and business operations tools for the nation’s 300,000+ daycares are still often pen, paper and Excel spreadsheet affairs.

This industry just needs patient investors with long-term perspectives.

This is a great time to diversify investment portfolios and support relatively recession-proof companies meaningfully expanding access to childcare. COVID has finally started to bring this largely offline industry online. Parents are now willing to go digital for childcare decisions and providers are adopting new online technologies at a record pace. These tailwinds provide the perfect conditions for startups.

Solving this problem is a huge business opportunity that affects so much else. When the millions of parents with young children can’t find care, they can’t work. We saw this over and over again since the start of the pandemic. The average American family can spend up to 25% of their income on early childhood care, while the average care worker makes approximately $12 an hour.

Unlocking innovation here at scale will require public and private investment. Government shapes and enables markets, from the explosion of technology that followed from Kennedy’s investment in the space race to more recent fundamental investments in wind, solar and electric vehicles. NASA catalyzed dozens of new technologies in the 1960s because it had both a generous budget and the flexibility to work with the best private-sector contractors available to solve specific problems.

The revitalization of the childcare sector would benefit from an ambitious and galvanizing “moonshot” goal, like providing universal, free childcare for all Americans.

By collaborating with flexibility and creativity across the public and private sectors, we can achieve a basic shared goal that other democracies have already fulfilled — the accessible provision of high-quality childcare for all members of society.

#child-care, #column, #corporate-finance, #covid-19, #diversity, #economy, #federal-government, #opinion, #startups, #tc, #tesla, #uber, #venture-capital