Alexander J. Díaz Thomas* 76 U. Miami L. Rev. Caveat 1 (2022).
This article addresses the issues relating to the use of Artificial Intelligence (“AI”) in venture capital investing. Specifically, it addresses the use of discriminatory and biased AI in venture capital and assesses the effects that a lack of funding could have on historically marginalized communities. Next, it illustrates how and why AI––an algorithm––could discriminate based on gender and race in the venture capital setting.
This article concludes by proposing that Congress designate an agency to enforce and adopt a governing regulatory framework for AI discrimination in venture capital, in both the long term and short term. These recommendations are based on state, historical, and international approaches to regulating AI and other emerging technologies.
Introduction
“By 2025, more than 75% of venture capital and early-stage investor executive reviews will be informed by AI and data analytics.”[1] Historically, startups seeking immediate cash have gone to venture capital (“VC”) firms and physically presented their products to secure capital.[2] After the startup pitches the idea, the firm––predominantly white, non-Hispanic, and male-dominated––determines whether it wishes to invest in the company.[3]
Traditionally, race and gender play, at a minimum, a subconscious role in determining whether funding is awarded to the fledgling company.[4] VC funds are likely not blatantly and consciously discriminating based on race and gender; however, race and gender inform their decisions, as they make “gut” decisions whether to invest based on the pitch. For example, an academic study found that “[i]nvestors prefer pitches presented by male entrepreneurs compared with pitches made by female entrepreneurs, even when the content of the pitch is the same.”[5] Moreover, as of 2018, only 3% of funding went to women-run businesses.[6] A plethora of explanations exist for the asymmetries in VC funding to women- and minority-led startups versus white-male-led startups, such as (1) the fact that historically only men pitched to VC funds,[7] (2) the confidence gender gap,[8] and (3) Mirrortocracy.[9] In light of the statistical evidence and prevailing psychological research, the reality is that as long as these “gut decisions” continue to be the determining factor in the white-male-dominated VC space, minority founders, especially women, will continue to be shut out. Melinda Gates notes that “[f]or a long time, venture capital has been an industry that funnels money from white men to white men,”[10] and this history is poised to repeat itself.
As recently as 2019, thirty-eight percent of global venture capital firms used AI in determining whether a startup should receive funding.[11] However, as noted above, that number is expected to continue growing as the use of AI in decision making becomes an industry standard. The issue with utilizing AI is that it can result in digital discrimination, which manifests as bias against minorities and women.[12] As a result of AI bias, women- and minority-led startups, which are already underfunded, receive even less funding.[13] This article analyzes the pitfalls of AI-assisted VC investing in tech startups and suggests methods for regulating AI bias in VC funding within the existing legal framework.
To understand how the AI bias in VC investing operates, one must first understand (1) how AI functions, and (2) the practical non-existence of diverse decision-makers in venture capital firms.[14]
I. Historical VC Investing
Understanding the historical backdrop and demographics of the VC world, as well as the landscape that AI was brought in to reshape, is crucial to understanding this article’s suggested framework.[15]
A. How VC Works and Demographics
Historically, it was common for start-up companies attempting to secure capital for funding their businesses to walk into large conference rooms dominated by white, non-Hispanic, male investors and attempt to sell the investors on their product and company.[16] It was a simple transaction of investors’ cash for shares in the fledgling company.[17] The investors, trusting the presentation or “pitch” by the startup, would then review the company’s books, assess the profitability of the investment, and await their return on investment (“ROI”), mainly in the form of an increase in the share price.[18] The demographics in this high-risk, high-reward world of VC investing remain largely unchanged.[19] Tellingly, dollars often go to those whom the fund managers trust.[20] A large part of trust is based on social conditioning factors such as race, academic pedigree, and gender.[21] Therefore, these factors are often determinative of which founders the fund managers trust.[22] This article posits that due to the stress of such a frenzied and high-risk environment, trust was––and still is––a commodity reserved for those who look the part.[23] This leads one to conclude that, today, more white non-Hispanic founders receive funding than minority or female founders.[24]
B. What Is AI’s Role in VC and How Does AI Operate?
As of 2019, thirty-eight percent of global venture capital firms used artificial intelligence to assess risk and augment insight by analyzing available data. Estimates suggest that “[b]y 2025 more than 75% of venture capital and early-stage investor executive reviews will be informed by AI and data analytics.”[25] There are three crucial reasons why the majority of venture capital firms will shift to using AI-augmented analyses in determining funding.[26] First, AI is, simply put, better than novice human analysts at extracting insights from data, as AI is less liable to cognitive biases.[27] Second, AI speeds up the investment process.[28] Lastly, AI, on average, outperforms human investors.[29]
AI operates by being fed historical data, determining precedents, and predicting future outcomes based on previously learned data.[30] Then, “it just extrapolates the patterns that exist in the real world data that we give it to learn to exploit these patterns in order to distinguish between the potential decision alternatives.”[31] Therefore, when a fledgling startup comes to a VC firm, the AI, after having been fed the information about past startups, can compare against its memory and decide whether capital should be granted or not.[32] When compared to novice angel investors, an algorithm-based investing program’s results did not just “outperform the human average [internal rate of return],” but “produced an increase[d] [internal rate of return] of more than 184% over the human average.”[33] The addition of AI in the early stages of investing can serve as a tool that also speeds up the process of VC funding.[34] As such, AI will likely be used in this space to augment human analysis rather than replace it.
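To make these mechanics concrete, the following is a minimal, hypothetical Python sketch of this kind of supervised pattern-learning, written with the open-source scikit-learn library. Every feature, record, and outcome below is invented for illustration; the sketch depicts no real VC dataset or vendor product.

# Hypothetical sketch: a model "learns" from historical deal data.
# All features, records, and outcomes are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is a past startup: [founder_years_experience, prior_exits,
# warm_introduction (1/0), elite_school (1/0)]
past_deals = [
    [10, 1, 1, 1],
    [3, 0, 0, 0],
    [7, 2, 1, 1],
    [5, 0, 0, 1],
    [2, 0, 0, 0],
    [12, 3, 1, 0],
]
# Label for each row: did the investment succeed (1) or fail (0)?
outcomes = [1, 0, 1, 0, 0, 1]

# "Feeding" the historical data: the model fits whatever patterns
# separate the successes from the failures in these records.
model = LogisticRegression().fit(past_deals, outcomes)

# A new pitch arrives; the model extrapolates from the past.
new_startup = [[6, 1, 0, 1]]
print(model.predict_proba(new_startup))  # predicted failure/success probabilities

The point of the sketch is that the model can only generalize from whatever patterns––legitimate or discriminatory––are embedded in the historical records it is given.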
C. Inherently Biased AI and Its Effects on VC Investing
Multiple studies point out that VC is inherently biased––whether consciously or not––against both women[35] and people of color.[36] While many potential reasons for this documented bias exist, two likely reasons are lack of representation and pattern recognition. VC funds are––and were––a sea of white non-Hispanic men.[37] While the exact causes of bias in VC funds are debatable, the existence of the bias is not––and that bias has led to the creation of discriminatory datasets.[38] Using these existing biased datasets in their current forms to train the AI would teach the AI to reproduce the social inequalities inherent in the datasets. The AI would perceive the historical patterns of biased funding and perpetuate gendered and racial inequality in VC funding.[39] Thus, in its current form, AI, as utilized by VC firms, is highly susceptible to bias due to training the AI on discriminatory datasets.[40]
Sequoia provides a recent example of knowingly deploying inherently biased AI.[41] In attempting to reduce bias, CB Insights, the firm that trained the algorithm, likely exacerbated bias through its new AI software, “Management Mosaic.”[42] Even though the AI was trained without labels and classes of race, style, and gender, by utilizing historical data the company failed to take into consideration the role played by access to capital, educational opportunities, and social networks––which historically were available exclusively to white males[43]––and how those factors affect the algorithm’s success predictors.[44] The Algorithmic Justice League (“AJL”) notes that claiming the software will not be prone to bias merely because its training data contain no gender or racial classes is “extremely naive.”[45] The AJL’s worry is not unfounded: the AI will likely perform its task, discovering the underlying pattern of historical racial and gender discrimination that permeates the history of funding by VC firms.[46] Next, it will correlate these results with other factors that serve as proxies for race and gender.[47] Therefore, in attempting to eliminate bias in VC investing by forgoing labeled or classified datasets––unlike many other financial technology AI companies––CB Insights is automating sex and race discrimination through proxy factors that the AI has correlated with race and sex.[48]
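The proxy mechanism the AJL warns of can be shown with a deliberately contrived sketch: the protected attribute is withheld from training, yet a correlated feature––here, an invented “warm introduction” flag standing in for network access––lets the model reconstruct the same discriminatory pattern. The data and the correlation below are fabricated solely to illustrate the mechanism.

# Contrived illustration of proxy discrimination: the protected
# attribute (gender) is never shown to the model, but a correlated
# feature ("warm_intro," a stand-in for network access) carries
# the same signal.
from sklearn.linear_model import LogisticRegression

# Invented history: funding tracked gender, and warm introductions
# tracked gender too. Tuples are (gender: 0=woman/1=man, warm_intro, funded).
history = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1), (1, 1, 1),
    (0, 0, 0), (0, 0, 0), (0, 1, 1), (0, 0, 0),
]

# Train WITHOUT the protected attribute -- only warm_intro is used.
X = [[warm] for _gender, warm, _funded in history]
y = [funded for _gender, _warm, funded in history]
model = LogisticRegression().fit(X, y)

# The model still disfavors the group that historically lacked network
# access, because warm_intro acts as a proxy for gender in this data.
print(model.predict_proba([[1]]))  # applicant with a warm introduction
print(model.predict_proba([[0]]))  # applicant without one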
II. Lay of the Land: What Are Regulators Doing About Bias in VC Investing?
The Federal Trade Commission (“FTC”) has historically positioned itself as the United States’ regulator for new and emerging technologies.[49] It derives its authority to regulate from the broad language of section 5 of the Federal Trade Commission Act, 15 U.S.C. § 45, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”[50] The FTC, through its broad section 5 powers to regulate unfair commercial practices, believes it can fill the regulatory gap AI has left open.[51]
In an official blog post in April 2021, the FTC clarified that it would now regulate AI discrimination based on protected class:[52] “It’s essential to test your algorithm—both before you use it and periodically after that—to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.”[53] The FTC takes the position that unfair or deceptive practices violative of section 5 “include the sale or use of—for example—racially biased algorithms.”[54]
The blog post clarifies that the FTC aims to investigate the use of biased AI.[55] Prior to the FTC guidance, it was unclear how regulators could police AI discrimination on the basis of a protected class. Additionally, the FTC lays out the basic framework that it expects companies to follow and issues a warning to companies to “keep in mind that if you don’t hold yourself accountable, the FTC may do it for you.”[56] The basic framework requires companies to rely on racially inclusive datasets, to frequently test the AI to ensure it “doesn’t discriminate based on race, gender, or other protected class,”[57] and to “be careful not to overpromise what your algorithm can deliver.”[58] The FTC places the onus on the third-party AI programmer-provider and user (in this case, the VC firm) to ensure––through periodic testing by the firm and training with carefully curated datasets by the provider[59]––that the AI is not racially biased.[60]
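The FTC post prescribes no particular testing methodology, but the periodic testing it calls for could begin with something as simple as comparing the algorithm’s funding-recommendation rates across demographic groups. The sketch below applies the “four-fifths rule” heuristic borrowed from employment-discrimination practice to a batch of invented decisions; the threshold, the data, and the choice of test are illustrative assumptions, not an FTC standard.

# Illustrative periodic bias test: compare the rate at which an
# algorithm recommends funding across two groups. The four-fifths
# threshold is a heuristic borrowed from employment law, used here
# as an assumption -- the FTC prescribes no specific test.

def selection_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def passes_four_fifths(group_a: list[bool], group_b: list[bool],
                       threshold: float = 0.8) -> bool:
    """True if neither group's funding rate falls below `threshold`
    times the other group's rate."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    if max(rate_a, rate_b) == 0:
        return True  # nobody was funded; no disparity to measure
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# Hypothetical quarterly batch of AI funding recommendations.
male_founders = [True, True, False, True, True, False, True, True]
female_founders = [True, False, False, False, True, False, False, False]

if not passes_four_fifths(male_founders, female_founders):
    print("Selection-rate disparity detected; audit the model and its data.")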
In 2019, the Trump Administration issued Executive Order 13,859, which promoted AI and removed barriers to its use.[61] In response to the Executive Order, the acting director of the Office of Management and Budget issued a memorandum to the heads of executive departments and agencies that agreed with the position now expressed by the FTC.[62] The memo warned of AI that could “produce[] discriminatory outcomes or decisions that undermine public trust and confidence in AI” and instructed that “[w]hen considering regulations or non-regulatory approaches related to AI applications, agencies should consider, in accordance with law, issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application . . . .”[63] Additionally, instructions were given for agencies to “consider using any existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks, as a means of encouraging AI innovation in that sector.”[64] The Trump administration’s actions demonstrated a lack of concern with bias and risk mitigation, and instead promoted industry over employee safety, much like the government policy of limiting the liability of railroad companies at the turn of the 20th century.[65]
In September 2021, President Biden’s Administration expressed a desire, in conjunction with the European Union (“EU”), to evaluate and develop AI that is trustworthy and respects universal human rights and shared democratic values.[66] Based on the Organization for Economic Co-operation and Development’s (“OECD”) recommendation for AI, the executive branch indicated that it is moving to conduct a study with the EU about the impact of AI on the labor market.[67]
Individual states have also attempted to regulate AI bias locally.[68] For example, the state of Washington is attempting to pass Senate Bill 5116, which makes it an unfair practice “for any automated decision system to discriminate against an individual, or to treat an individual less favorably than another, in whole or in part, on the basis of [being in a protected class].”[69] Similarly, California––America’s leading state-level tech regulator––has proposed a bill that would require the developer of software that will be used by the state to “describe any potential disparate impacts on the basis of characteristics [of members of a protected class].”[70] However, A.B. 13, the proposed California bill, places a higher burden on the reporter. It would require developers and users to describe any potential disparate impacts from the use of the software within the scope of its use.[71] The bill also extends the reporting obligation to “reasonably foreseeable capabilities outside the scope of its proposed use.”[72] This means that the California legislature intends to put users on notice that if their AI could result in any discrimination against any member of a protected class,[73] even outside the scope of its purported use, the user and developer of the AI must report the material (discriminatory) effects of the biased AI to the state government.[74] Whether the nascent Californian legislation will succeed, and will be effectively enforceable, remains to be seen.
Additionally, the EU has proposed a comprehensive governance framework for AI, similar to its data privacy framework, the General Data Protection Regulation (“GDPR”).[75] The EU’s statute has a long-arm effect that extends protections to wherever an EU citizen is located. In other words, wherever AI is used to process information about an EU citizen, the proposed legislation would grant that citizen protections, regardless of whether the developer or user of the AI has a physical presence within the EU.[76]
The European legislation uses a risk-based approach in defining what obligations attach to the AI.[77] Notably, an AI falls under the high-risk scheme of the statute if it creates “a high risk to the health and safety or fundamental rights of natural persons.”[78] One fundamental right of natural persons within the EU is the right to non-discrimination.[79] Therefore, much like under the GDPR, American VC funds utilizing AI must comply with the EU’s AI framework should they work with any citizen of the EU. For example, if a VC fund utilizes an algorithm that infringes on a European citizen’s right to non-discrimination, the fund, due to the statute’s extraterritorial reach, would be infringing on that citizen’s fundamental right.[80] Accordingly, the fund would be penalized by the European Artificial Intelligence Board (“EAIB”), the enforcer of the AI framework in Europe, and national member-state regulators.[81]
In sum, the United States’ model does not reflect the European framework because there are no specific delegations from Congress explicitly allowing the FTC to govern AI bias. Additionally, as there are no proposed congressional bills to fill the regulatory gap, states have been forced to create their own individual frameworks for regulating AI bias.[82] State frameworks are based mainly on the European framework, resulting in a patchwork of overlapping state, federal, and international regulations with concurrent jurisdiction.[83]
III. Short-Term Solutions within the Current Regulatory Framework
While there are many short-term solutions to attempt to fill the gap, this article proposes two: (1) promoting judicially enforceable soft law until Congress can address this issue, and (2) expanding the Data and Trust Alliance and encouraging social pushback against VC firms known to be using inherently biased data.
First, soft law has historically played a considerable part in industry regulation in the face of congressional silence.[84] Soft law refers to a patchwork of legally non-binding instruments that, while not directly enforceable by the government, set substantive expectations, such as “professional guidelines, private standards, codes of conduct and best practices.”[85] This patchwork of non-enforceable standards is perfectly imperfect for handling the governance issues AI presents in the face of congressional silence because of the speed and flexibility with which it can be implemented and operated.[86]
The first reason judicially enforceable soft law is best suited to this task is that soft law can effectively keep up with the speed of AI, whereas Congress cannot.[87] In short, Congress was born in the 18th century, and traditional congressional action operates on an antiquated and lengthy system of regulation that requires, in many cases, years before action is taken and effective regulation occurs.[88] AI, by contrast, a 21st-century development, is changing and developing by the day and can outpace congressional responses to effectively govern it.[89] While Congress has created an expert task force to deal directly with and study AI,[90] AI’s sheer rate of growth and development will continue to outpace Congress’ ability to effectively understand AI, let alone regulate it or its bias in the traditional way.[91] Congress must provide a solution to regulate today’s issues with AI. Soft law is one potential solution until congressional leaders can clarify how to respond to biased AI.[92] Industry-specific soft law, created by industry experts with a deep understanding of the subject matter, provides a quick, workable, and flexible approach that can respond to the ever-changing world of AI.[93] Soft law can solve today’s problems and take immediate action while Congress engages in the bureaucracy of legislating.[94]
Another reason judicially enforceable industry-specific soft law is best suited to this task is the complexity––both technical and structural––of AI, and the diversity of AI’s interactions with different industries.[95] This is why, as discussed above, industry experts in both the technical and financial sectors should create industry standards for the regulation of bias in AI-assisted VC investing, and a strong judiciary, applying existing common law principles of fairness and equality, should enforce those standards on a case-by-case basis.
Second, another possible short-term solution for the regulation of AI bias in AI-assisted VC investing is the expansion of the Data and Trust Alliance (“DTA”). The DTA, a private organization, was founded to “implement something concrete.”[96] The DTA, composed of corporate leaders, developed a program to evaluate and score whether members’ AI programs are likely to produce bias and, should the AI produce biased results, to allow the individual organization to combat or mitigate the bias.[97] This program is currently being deployed by the signatories of the DTA, which include CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (Facebook’s parent company), Nike, and Walmart.[98] This means that companies that large parts of the American public utilize have agreed to examine and test their data for bias and correct any biases their AIs exhibit.
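The DTA has not published the internals of its evaluation program, so the following sketch is only a guess at the general shape such a scorecard might take: score a vendor’s AI against a handful of bias-hygiene criteria and flag it for mitigation when it falls short. Every criterion, weight, and threshold below is invented; none reflects the DTA’s actual methodology.

# Purely hypothetical vendor-AI bias scorecard, in the spirit of
# (but not based on) the DTA's unpublished criteria.

CRITERIA_WEIGHTS = {  # invented criteria and weights
    "training_data_documented": 0.25,
    "protected_proxies_reviewed": 0.30,
    "outcomes_tested_by_group": 0.30,
    "mitigation_plan_exists": 0.15,
}

def bias_hygiene_score(answers: dict[str, bool]) -> float:
    """Return a 0-to-1 score; higher means better bias hygiene."""
    return sum(weight for name, weight in CRITERIA_WEIGHTS.items()
               if answers.get(name, False))

vendor_answers = {
    "training_data_documented": True,
    "protected_proxies_reviewed": False,  # proxies never analyzed
    "outcomes_tested_by_group": True,
    "mitigation_plan_exists": False,
}

score = bias_hygiene_score(vendor_answers)
if score < 0.75:  # invented acceptance threshold
    print(f"Score {score:.2f}: require mitigation before deployment.")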
The DTA correctly posits that “[regulating] ultimately needs to be done by an independent authority”;[99] however, in the short term, the court of public opinion can significantly influence industry action absent an independent authority. Public pressure, in the era of environmental, social, and governance (“ESG”) investing[100] and social media, has proven time and time again to carry significant sway in regulating company and industry actions.[101] This article proposes that, at least in the short term, absent congressional action, public outcry, coupled with judicially enforceable soft law, could enable AI-assisted VC firms that are signatories of the DTA to mitigate their use of inherently biased AI.[102] Thus, expanding the reach and effectiveness of the DTA’s program and pairing AI testing with public pressure could mitigate the use of biased AI in VC investing.
IV. Long-Term Solution to the Mitigation of Bias in AI-Assisted VC Investing Beyond the Regulatory Horizon
The long-term solution to the mitigation of bias in AI-assisted VC investing that this article posits is to promote the Americanization of the European framework while using a version of the DTA’s AI bias evaluation for bias detection. As with data privacy regulation, the Europeans have taken the lead in the regulation of AI bias, having already proposed and drafted a broad and overarching piece of general legislation.[103] The EU’s proposed legislation uses a four-tiered risk-based approach in defining what obligations attach to the AI.[104] A VC using biased AI to determine whether to grant funds likely falls within the high-risk tier of the statute, as the AI creates “a high risk to the health and safety or fundamental rights of natural persons.”[105] A fundamental right of natural persons within the EU is the right to non-discrimination.[106] Therefore, much like under the GDPR, American VC funds utilizing AI must comply with the EU’s AI framework should they work with any citizen of the EU. Similar to the European model, an American AI governance model could establish that employing AI that discriminates on the basis of protected class in venture capital funding constitutes a violation. Three main steps are needed to operationalize the European model in the U.S.
The first step for long-term regulation of biased AI-assisted VC investing lies with Congress. Congress should clearly establish that the FTC, or another competent agency, has broad authority and discretion to regulate AI, and extend that authority to AI bias in VC, much as the Europeans established the EAIB and empowered member-state regulators.[107] The absence of a clearly defined federal regulator permits industry non-compliance and dubious enforcement authority.[108]
For the second step, following California’s approach, federal regulators should base the system of regulation on the EU’s model and establish that AI bias that discriminates against an individual on the basis of protected status constitutes illegal discrimination, expanding anti-discrimination protections on the basis of protected class from employment opportunities to VC funding.[109] This would establish that, in the U.S., AI discrimination against a person––much like in the EU––would violate a fundamental right of the individual, allowing for legal action to be taken.[110]
For the third step, Congress should allow the agency, like the IRS, to audit both (1) the datasets used to train the AI, and (2) the algorithm itself, in order to ensure that private entities are in compliance with the proposed federal law.[111] This puts the onus on the VC firms to ensure that their datasets comply with federal law. Lastly, a clearly defined cause of action should be developed in case of non-compliance with the proposed federal obligation.[112] This would allow individuals to bring suit should the VC fund choose not to comply and exhibit bias towards people on the basis of a protected class.
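A dataset audit under the first prong could begin with simple descriptive checks: how is each group represented in the training data, and at what rate was each group historically funded? The sketch below runs that kind of first-pass check on an invented dataset; an actual agency audit would, of course, go far deeper, but skew visible at even this level would warrant scrutiny of any model trained on the data.

# First-pass dataset audit: group representation and historical
# funding rates in the training data. All records are invented.
from collections import Counter

# (group, funded) pairs drawn from a hypothetical training set.
training_records = [
    ("white_male", True), ("white_male", True), ("white_male", False),
    ("white_male", True), ("white_male", True), ("woman", False),
    ("woman", False), ("woman", True), ("minority", True),
    ("minority", False),
]

group_counts = Counter(group for group, _funded in training_records)
funded_counts = Counter(group for group, funded in training_records if funded)

for group, count in group_counts.items():
    share = count / len(training_records)
    rate = funded_counts.get(group, 0) / count
    print(f"{group}: {share:.0%} of dataset, {rate:.0%} historically funded")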
Conclusion
In sum, while there is great fear that AI-assisted VC investing will entrench bias in VC investing, this article shows that this is not necessarily the case if proper safeguards are implemented. As explained above, regulatory tools currently exist to act against discrimination in AI-assisted VC investing.[113] The question becomes whether congressional leaders will take action to allow those tools to apply to AI bias, and, absent congressional action, what actions industry experts can take. Thus, AI in VC investing actually provides a moment for leaders to remove the historical bias that has long existed in VC investing and to take a chance on equality within the high-risk, high-reward world of VC investing.
* JD Candidate 2023, University of Miami School of Law; B.A. 2019, Boston University. I would like to thank Daniel Mayor for his patience throughout the editing process. Further, I would like to thank Alfredo Daly for his help and encouragement during the writing of this article.
[1] Kyle Wiggers, Gartner: 75% of VCs Will Use AI to Make Investment Decisions by 2025, VentureBeat (Mar. 10, 2021, 12:30 AM), https://venturebeat.com/2021/03/10/gartner-75-of-vcs-will-use-ai-to-make-investment-decisions-by-2025/.
[2] Richard D. Harroch & Mike Sullivan, Startup Financing: 5 Key Funding Options for Your Company, Forbes (Dec. 22, 2019, 3:47 PM), https://www.forbes.com/sites/allbusiness/2019/12/22/startup-financing-key-options/?sh=18d0b9a32a84 (describing the process for obtaining venture capital).
[3] Melinda Gates, The VC Industry Funnels Money to White Men, Wired (Apr. 15, 2019, 6:00 AM), https://www.wired.co.uk/article/melinda-gates-venture-capital-diversity (reporting that as of 2017 venture capitalists were 82% male and 70% white).
[4] Kamal Hassan et al., How the VC Pitch Process is Failing Female Entrepreneurs, Harv. Bus. Rev. (Jan. 13, 2020), https://hbr.org/2020/01/how-the-vc-pitch-process-is-failing-female-entrepreneurs (arguing that pitching should be ditched as VCs know that pitching “leads to selecting startups based on gender and looks”).
[6] See id.; see also Stacy Francis, 11 Tips for 11 Million Women – How Female Entrepreneurs Can Beat the Odds, CNBC (Oct. 21, 2019, 11:41 AM), https://www.cnbc.com/2019/10/21/how-todays-11-million-female-entrepreneurs-can-beat-the-odds.html (reporting “that since 2007 the number of women-owned firms has grown at five times the national average” and that women currently own “38% of U.S. small businesses”).
[7] Women & Pub. Pol’y Program, Harv. Kennedy Sch., Advancing Gender Equality in Venture Capital 29 (2019) (“Women are fundamentally disadvantaged in pursuing male-stereotyped roles, such as leadership and venture investing, because there exists a perceived incongruity between the attributes of women and the requirements of those roles.”).
[8] See Hassan et al., supra note 4 (describing the confidence gender gap as a situation where “women tend to undervalue themselves compared to men in competitive situations, and consequently come off to potential investors as ‘less sure of themselves’”).
[9] Richard Kerby, Where Did You Go to School?, Medium (July 30, 2018), https://medium.com/@kerby/where-did-you-go-to-school-bde54d846188 (explaining that Mirrortocracy is a form of pattern matching based on characteristics of a previously successful group, such as gender, educational institution, race, etc.).
[11] Ciaran Daly, Only 38% of VCs Currently Use Data, AI to Evaluate Investment Opportunities, AI Bus. (Feb. 14, 2019), https://aibusiness.com/document.asp?doc_id=760814.
[12] Ferrer et al., Bias and Discrimination in AI: A Cross-Disciplinary Perspective, IEEE Tech. & Soc’y Mag., June 2021, at 72, 72 (“Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence (AI) techniques . . . .”).
[13] See Elizabeth Edwards, Check Your Stats: The Lack of Diversity in Venture Capital is Worse than it Looks, Forbes (Feb. 24, 2021, 1:48 PM), https://www.forbes.com/sites/elizabethedwards/2021/02/24/check-your-stats-the-lack-of-diversity-in-venture-capital-is-worse-than-it-looks/?sh=14529f0c185d (“[J]ust 1% of the $70 trillion wealth management industry is controlled by women or minority fund managers, which often directly impacts the number of dollars invested in female and underrepresented founders.”).
[14] See Edwards, supra note 13 (“[While] 58% of the people who work in the venture capital industry are white men, the more important statistic is that white men control 93% of the venture capital dollars.”).
[15] See generally Bob Zider, How Venture Capital Works, Harv. Bus. Rev. (Nov.–Dec. 1998), https://hbr.org/1998/11/how-venture-capital-works.
[16] See Gates, supra note 3 (noting that 82% of venture capitalists are male and that 70% are white); see also supra note 11 and accompanying text; see generally Zider, supra note 15 (explaining that the VC needs to be convinced that the idea and the team they are investing in is worth it).
[17] Zider, supra note 15 (“In a typical start-up deal, for example, the venture capital fund will invest $3 million in exchange for a 40% preferred-equity ownership position, although recent valuations have been much higher.”).
[18] See Zider, supra note 15 (explaining the mechanics of VC funding and investing).
[19] See Women & Pub. Pol’y, supra note 7, at 9 (“Venture capital is historically male-dominated, which has led to a staggering overrepresentation of men as VCs today. Only 21% of all investment professionals and approximately 11% of investing partners are women. Around three-quarters of U.S. VC firms do not have a single female partner.”); see also id. at 42 (“Women are also underrepresented as participants in VC deals [as entrepreneurs] with only 5.9% of U.S. deals involving all-female founding teams or solo female founders and 15.2% involving mixed-gender founding teams.”).
[20] See Harroch & Sullivan, supra note 2 (“The best way to get the attention of a VC is to have a warm introduction through one of their trusted colleagues[.]”).
[21] See Anthony M. Evans & Joachim I. Krueger, The Psychology (and Economics) of Trust, Soc. & Personality Psych. Compass, 2009, 1003–17, 1011 (“When the decision to trust occurs in an intergroup context, the general tendency to favor the ingroup becomes relevant . . . . This form of trust is motivated by the expectations that members of the same group will reciprocate and cooperate with one another.”).
[23] Kenny Herzog, Women and Minority Founders Still Vastly Underfunded, New Report Finds, Entrepreneur.com (2021), https://www.entrepreneur.com/article/363874 (last visited Feb. 6, 2022) (“Most VC backers still disproportionately allocate money to companies launched by white men.”).
[24] See, e.g., Ye Zhang, Discrimination in the Venture Capital Industry: Evidence from Two Randomized Controlled Trials 1 (Colum. Univ. Econ. Dep’t, Working Paper No. 4, 2020), https://econ.columbia.edu/wp-content/uploads/sites/41/2020/09/Ye_JMP-1.pdf (“Investors are biased towards female, Asian, and older founders in ‘lower contact interest’ situations; while biased against female, Asian, and older founders in ‘higher contact interest’ situations. [] These two experiments identify multiple coexisting sources of bias. Specifically, statistical discrimination is an important reason for ‘anti-minority’ investors’ contact and investment decisions, which was proved by a newly developed consistent decision-based heterogeneous effect estimator.”).
[26] See generally Xiao Jean Chen, How AI Is Transforming Venture Capital, BRINK (June 14, 2021), https://www.brinknews.com/how-ai-is-transforming-venture-capital/.
[27] Torben Antretter et al., Do Algorithms Make Better — and Fairer — Investments Than Angel Investors?, Harv. Bus. Rev. (Nov. 2, 2020), https://hbr.org/2020/11/do-algorithms-make-better-and-fairer-investments-than-angel-investors (“According to our research, novice investors are easily outperformed by the algorithm — with their limited investment experience, they showed much higher signs of cognitive biases in their decision making.”).
[28] Jared Council, VC Firms Have Long Backed AI. Now, They Are Using It., WALL ST. J. (Mar. 25, 2021, 7:00 AM), https://www.wsj.com/articles/vc-firms-have-long-backed-ai-now-they-are-using-it-11616670000 (reporting that, through the use of AI, investment decision turnaround time went from two weeks to twenty-four hours).
[29] Antretter et al., supra note 27 (“While the algorithm achieved an average internal rate of return (IRR) of 7.26%, the 255 angel investors — on average — yielded IRRs of 2.56%. Put another way, the algorithm produced an increase of more than 184% over the human average.”).
[30] See generally Sara Brown, Machine Learning, Explained, MIT MGMT. (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained/ (explaining how AI and machine learning work); Bernard Marr, What Is the Difference Between Artificial Intelligence and Machine Learning?, Forbes (Dec. 6, 2016, 2:24 AM), https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/ (explaining that AI is “the broader concept of machines being able to carry out tasks in a way that we would consider ‘smart’” while machine learning is “a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves”).
[31] Antretter et al., supra note 27.
[32] For example, Correlation Ventures, a San Francisco-based VC fund, uses AI to speed up the investment process by having the AI “review[] information extracted by humans from . . . materials submitted by startups.” See Council, supra note 28. According to managing director David Coats, “[t]he information is fed into an algorithm trained on data from more than 100,000 venture financing rounds . . . .The algorithm identifies how factors such as team experience or board composition correlate with future investor returns.” Id.
[33] Antretter et al., supra note 27.
[34] See Council, supra note 28.
[35] See, e.g., Michael Ewens & Richard Townsend, Are Early Stage Investors Biased Against Women?, Harv. L. Sch. F. on Corp. Governance (Sept. 20, 2019), https://corpgov.law.harvard.edu/2019/09/20/are-early-stage-investors-biased-against-women/ (finding that male investors express less interest in female entrepreneurs compared to observably similar male entrepreneurs).
[36] See, e.g., John K. Paglia & Maretno A. Harjoto, The Effects of Private Equity and Venture Capital on Sales and Employment Growth in Small and Medium-Sized Businesses, 47 J. Banking & Fin. 177, 189 (2014) (“[O]wners who are considered as minority . . . are 21.7% . . . less likely to receive PE funding.”); Zhang, supra note 24, at 1 (citing Paul A. Gompers & Sophie Q. Wang, Diversity in Innovation 61 (Nat’l Bureau of Econ. Rsch., Working Paper No. 23082, 2017)) (“87% of U.S. venture capitalists are white, and investors may also have unconscious bias against minority founders.”).
[37] See Daniel Applewhite, Founders and Venture Capital: Racism is Costing us Billions, Forbes (Feb. 15, 2018, 8:00 AM), https://www.forbes.com/sites/forbesnonprofitcouncil/2018/02/15/founders-and-venture-capital-racism-is-costing-us-billions/ (“Pattern recognition has enabled VC’s to mitigate risk but has also limited their profit potential and created an inherent funding bias. This bias stems from barriers to early-stage capital, a lack of representation in the investing space and is perpetuated by systems of racism that destroy opportunity within communities of color.”).
[38] Gené Teare, The Conversation and the Data: A Look at Funding to Black Founders, Crunchbase News (June 5, 2020), https://news.crunchbase.com/news/the-conversation-and-the-data-a-look-at-funding-to-black-founders/ (“[J]ust 1 percent of venture-funded startup founders are black . . . .”).
[39] See Fed. Trade Comm’n, Big Data: A Tool for Inclusion or Exclusion (2016) (noting that commentators fear that AI could “create or reinforce existing disparities”); Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, REUTERS (Oct. 10, 2018, 7:04 PM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (“Amazon’s [AI] were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain’ . . . .”).
[40] Hayden Field, An AI Tool Meant to Reduce Bias in Investing Could Exacerbate it, Experts Say, MORNING BREW (Dec. 17, 2021), https://www.morningbrew.com/emerging-tech/stories/2021/12/17/a-new-tool-meant-to-reduce-bias-in-investing-could-exacerbate-it-experts-say.
[41] See, e.g., id. (“The algorithm’s basic premise is simple: Input a startup team’s background (résumé milestones and other criteria), and out comes a prediction of their ‘success’ likelihood—implying how good an investment the individuals themselves might make.”).
[42] Id. CB Insights’s AI software, Management Mosaic, is a prediction algorithm for scoring early-stage founders and management teams, to help the company’s roster of 1,000+ clients including Cisco, Salesforce, and Sequoia expedite investment, purchasing, and M&A decisions . . . . The algorithm is trained on so-called signals of success, according to the company—from a founder’s educational institution to their “network quality.” But because much of the data is historical, it could be particularly prone to bias. Id.
[43] Kerby, supra note 9 (finding that 40% of venture capitalists came from Harvard or Stanford).
[44] See Field, supra note 40; see also Courtney Hinkle, Note, The Modern Lie Detector: AI-Powered Affect Screening and the Employee Polygraph Protection Act (EPPA), 109 Geo. L.J. 1201 (2021) (“[W]here so much historical discrimination is carried forward into present-day discrimination, and so much about ourselves—from our education, our employment history, or even our tastes in music—is determined by our race, gender, or national origin, it may prove impossible for any algorithm to hermetically seal off consideration of these factors.”).
[46] See Ignacio N. Cofone, Algorithmic Discrimination Is an Information Problem, 70 Hastings L.J. 1389, 1389, 1401 (2019) (“To avoid disparate treatment, the protected category attributes cannot be considered; but to avoid disparate impact, they must be considered.”).
[47] See Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1025, 1037–38 (2017) (contending that the main problem with algorithmic discrimination is that the algorithms glean bias from data that already shows discriminatory effects, even if not facially apparent); see also Robyn Caplan et al., Algorithmic Accountability: A Primer, Data & Society (Apr. 18, 2018), https://datasociety.net/library/algorithmic-accountability-a-primer/ (“[A]lgorithmic systems can make decisions on the basis of protected attributes like race, income, or gender–even when those attributes are not referenced explicitly–because there are many effective proxies for the same information.”).
[48] See Exec. Off. of the President, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (2016) (“Data-analytics companies are creating new kinds of candidate scores by using diverse and novel sources of information on job candidates. These sources, and the algorithms used to develop them, sometimes use factors that could closely align with race or other protected characteristics, or may be unreliable in predicting success of an individual at a job.”); see also Antretter et al., supra note 27 (“[T]he societal mechanisms that make ventures of female and non-white founders die at an earlier stage are just projected by the AI into a vicious cycle of future discrimination.”); Jonas Lerman, Big Data and Its Exclusions, 66 Stan. L. Rev. Online 55, 59 (2013) (“In a future where big data, and the predictions it makes possible, will fundamentally reorder government and the marketplace, the exclusion of poor and otherwise marginalized people from datasets has troubling implications for economic opportunity, social mobility, and democratic participation. These technologies may create a new kind of voicelessness, where certain groups’ preferences and behaviors receive little or no consideration when powerful actors decide how to distribute goods and services and how to reform public and private institutions.”).
[49] See generally Shari Claire Lewis, Here’s How the FTC Is Tackling Emerging Technology, Law.com (June 14, 2021), https://www.law.com/newyorklawjournal/2021/06/14/heres-how-the-ftc-is-tackling-emerging-technology/.
[50] 15 U.S.C. § 45(a)(1); see also Bret S. Cohen et al., FTC Authority to Regulate Artificial Intelligence, Reuters (July 8, 2021, 1:26 PM), https://www.reuters.com/legal/legalindustry/ftc-authority-regulate-artificial-intelligence-2021-07-08/ (“Section 5 prohibits unfair or deceptive acts or practices in or affecting commerce. An act or practice is considered deceptive if there is a statement, omission or other practice that is likely to mislead a consumer acting reasonably under the circumstances, causing harm to the consumer. An act or practice is considered unfair if it is likely to cause consumers substantial harm not outweighed by benefits to consumers, or to create competition circumstances where consumers cannot reasonably avoid the harm.”).
[51] See Abbey Stemler, Regulation 2.0: The Marriage of New Governance and “Lex Informatica”, 19 Vand. J. Ent. & Tech. L. 87, 104–05 n.97 (2016) (“[L]ack of formal delegation does not mean that business-developed market rules do not have bite, that they do not obtain legal standing. Firms that violate self-stated business terms to the detriment of consumers, for example, can be held liable by the Federal Trade Commission (FTC) under broad statutes banning deceptive business practice.”); see also Elisa Jillson, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, Fed. Trade Comm’n (Apr. 19, 2021), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai (“Fortunately, while the sophisticated technology may be new, the FTC’s attention to automated decision making is not. The FTC has decades of experience enforcing three laws important to developers and users of AI . . . .”).
[52] See, e.g., S.B. 1943, 219th Leg., 1st Ann. Sess. (N.J. 2020) (defining a member of a protected class as “an individual who has one or more characteristics, including race, creed, color, national origin, nationality, ancestry, age, marital status, civil union status, domestic partnership status, affectional or sexual orientation, genetic information, pregnancy, sex, gender identity or expression, disability or atypical hereditary cellular or blood trait of any individual, or liability for service in the armed forces, for which the individual is provided protections against discriminatory practices”).
[55] Id.; see also Esther Ajao, FTC Pursues AI Regulation, Bans Biased Algorithms, TechTarget (Oct. 19, 2021), https://www.techtarget.com/searchenterpriseai/feature/FTC-pursues-AI-regulation-bans-biased-algorithms (“The FTC has also clarified that the sale or use of racially biased algorithms, [such as Management Mosaic,] for example, is a deceptive practice banned by the FTC Act.”).
[58] Id. (meaning that the third-party seller of the AI to the investment firm also has an obligation to be truthful in its claims of non-biased results).
[60] The onus is placed on the consumer and not solely on the vendor because of AI’s ability to change over time. Additionally, consumers may tinker with the program.
[61] See Exec. Order No. 13,859, 84 Fed. Reg. 3967 (Feb. 11, 2019).
[62] Memorandum from Acting Director Russell T. Vought to the Heads of Executive Departments and Agencies (Jan. 7, 2019).
[65] See Gary T. Schwartz, Tort Law and the Economy in Nineteenth-Century America: A Reinterpretation, 90 Yale L.J. 1717, 1717–18 (1981) (quoting Lawrence M. Friedman, A History of American Law 417 (1st ed. 1973)) (“[T]he thrust of the rules, taken as a whole, approached the position that corporate enterprise would be flatly immune from actions sounding in tort.”).
[66] See U.S.-EU Trade and Technology Council Inaugural Joint Statement (Sept. 29, 2021), https://www.whitehouse.gov/briefing-room/statements-releases/2021/09/29/u-s-eu-trade-and-technology-council-inaugural-joint-statement/.
[68] See, e.g., S.B. 1943, 2020 Leg., 219th Sess. (N.J. 2020) (“A person, bank, banking organization, credit reporting agency, mortgage company, or other financial institution, lender or credit institution involved in the making or purchasing of any loan or extension of credit shall not discriminate through the use of an automated decision system against any person or group of persons who is a member of a protected class.”); see also Colo. Rev. Stat. Ann. § 10-3-1104.9 (West, Westlaw through legislation effective Feb. 24, 2022 of the 2d Reg. Sess., 73rd Gen. Assemb.) (mandating that insurers cannot “use any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression”).
[69] S.B. 5116, 2021 Leg., 67th Sess. (Wash. 2021).
[70] See A.B. 13, 2021 Leg., Reg. Sess. (Cal. 2021).
[72] Id. (specifying that the developer must describe “any potential disparate impacts on the basis of characteristics identified in the Unruh Civil Rights Act (Section 51 of the Civil Code) from the proposed use of the automated decision system, including, but not limited to, reasonably foreseeable capabilities outside the scope of its proposed use”).
[73] See generally Cal. Civ. Code § 51 (West, Westlaw through Ch. 10 of 2022 Reg. Sess.) (explaining what constitutes a protected class in California).
[74] See Cal. A.B. 13 (“The use of the automated decision system is likely to pose a material risk of harm from the use of the personal information of a significant number of individuals with regard to race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal record”).
[75] See generally Proposal for a Regulation of the Eur. Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206 [hereinafter E.U. Proposal about Artificial Intelligence Rules and Amendments].
[76] Mark McCarthy & Kenneth Propp, Machines Learn That Brussels Writes the Rules: The EU’s New AI Regulation, Brookings (May 4, 2021), https://www.brookings.edu/blog/techtank/2021/05/04/machines-learn-that-brussels-writes-the-rules-the-eus-new-ai-regulation/.
[77] Regulatory Framework Proposal on Artificial Intelligence, Eur. Comm’n (Feb. 28, 2022), https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai [hereinafter Eur. Comm’n].
[78] E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75.
[79] See also E.U. Charter of Fundamental Rights art. 21, https://fra.europa.eu/en/eu-charter/article/21-non-discrimination [hereinafter E.U. Charter art. 21].
[80] See E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75 (“this regulation applies to . . . providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”); see also McCarthy & Propp, supra note 76 (noting that “‘effects’ test [created by the EU legislation] potentially extends the law’s reach to companies without a market presence in the EU that use AI systems to process data about EU citizens.”).
[81] See generally E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75 (noting that upon being issued notice of failure to comply with the regulation either the EAIB or the monitoring member state can order that the company take corrective action, and order it to withdraw from the European Market).
[82] Nuala O’Connor, Reforming the U.S. Approach to Data Protection and Privacy, Council on Foreign Relations (Jan. 30, 2018), https://www.cfr.org/report/reforming-us-approach-data-protection.
[83] See id. (noting that the federal failure to adopt a federal technology law, data privacy and protection law similar to the GDPR, is causing U.S. companies and citizens to suffer, as they have to meet and comply with often contradictory state regulatory standards).
[84] See Roberta S. Karmel & Claire R. Kelly, The Hardening of Soft Law in Securities Regulation, 34 Brook. J. Int’l L. 883, 885 (2009) (noting that soft law has been used in securities law, primarily by the SEC, because of “the need for speed, flexibility, and expertise in dealing with fast-breaking developments in capital markets”).
[85] Gary Marchant, “Soft Law” Governance of Artificial Intelligence, AI Pulse (Jan. 25, 2019), https://aipulse.org/soft-law-governance-of-artificial-intelligence/.
[87] See id. (“The pace of development of AI far exceeds the capability of any traditional regulatory system to keep up, a challenge known as the ‘pacing problem’ that affects many emerging technologies.”); see also Gary E. Marchant et al., The Growing Gap Between Emerging Technologies and Legal-Ethics Oversight 19 (Gary E. Marchant et al. eds., 2011) (“Increasingly, the traditional legal tools of notice-and-comment rulemaking, legislation and judicial review are being left behind by emerging technologies, struggling to cope with even yesterday’s technologies. The consequence of this growing gap between the pace of technology and law is increasingly outdated and ineffective legal structures, institutions and processes to regulate emerging technologies.”).
[88] See Bianca Datta, Can Government Keep Up with Artificial Intelligence?, PBS (Aug. 10, 2017), https://www.pbs.org/wgbh/nova/article/ai-government-policy/ (“‘There is no possible way to have some omnibus AI law,’ says Ryan Calo, a professor of law and co-director of the Tech Policy Lab at the University of Washington. ‘But rather we want to look at the ways in which human experience is being reshaped and start to ask what law and policy assumptions are broken.’”).
[89] Marchant et al., supra note 87, at 19 (“The consequence of this growing gap between the pace of technology and law is increasingly outdated and ineffective legal structures, institutions and processes to regulate emerging technologies.”); see also Michael Littman et al., Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 71 (2021), https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-study (“[G]overnment institutions are still behind the curve, and sustained investment of time and resources will be needed to meet the challenges posed by rapidly evolving technology.”).
[90] Letter from Maxine Waters, Chairwoman, U.S. House Committee on Financial Services, to the Honorable Jerome H. Powell et al., (Nov. 29, 2021), https://financialservices.house.gov/uploadedfiles/11.29_ai_ffiec_ltr_cmw_foster.pdf (“This Congress, we re-established the AI Task Force so that it can continue its investigation on whether emerging technologies such as AI are serving the needs of consumers, investors, small businesses, and the American public, which is needed especially as we seek build back better after the COVID19 pandemic. Our first AI hearing this Congress focused on the use of AI and Machine Learning, and explored how Human-Centered AI can build equitable algorithms and address systemic racism and in Housing and Financial Services.”).
[91] Laurin Weissinger, AI, Complexity, and Regulation, OUP Handbook on AI Governance (forthcoming), https://ssrn.com/abstract=3943968 (“Regulating and governing AI will remain a challenge due to the inherent intricacy of how AI is deployed and used in practice. Regulation effectiveness and efficiency is inversely proportional to system complexity and the clarity of objectives: the more complicated an area is and the harder objectives are to operationalize, the more difficult it is to regulate and govern.”); see also Matthew Weaver, UK Public Faces Mass Invasion of Privacy as Big Data and Surveillance Merge, The Guardian (Mar. 14, 2017), https://www.theguardian.com/uk-news/2017/mar/14/public-faces-mass-invasion-of-privacy-as-big-data-and-surveillance-merge (noting that the UK’s then Surveillance Camera Commissioner was struggling to keep up with the pace and size of the technological changes of big data, as there was no regulatory policy that could address them).
[94] See Marchant, supra note 85, at 4 (“Soft law instruments can be adopted and revised relatively quickly, without having to go through the traditional bureaucratic rulemaking process of government.”).
[95] See Weissinger, supra note 91 (“AI complexity is not only technical but also social and organizational: what we call AI or AI systems are not, at least in practice, singular device or program but socio-technical networks of subsystems. Per se, this is not uncommon in contemporary value chains (Weissinger 2020) but in the case of AI and computing, an individual subsystem is already uncertain and carries the risk of emergent and unexpected behavior.”).
[96] Steve Lohr, Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring, N.Y. Times (Dec. 8, 2021), https://www.nytimes.com/2021/12/08/technology/data-trust-alliance-ai-hiring-bias.html.
[100] See generally The Investopedia Team, Environmental, Social, and Governance (ESG) Criteria, Investopedia (Feb. 23, 2022), https://www.investopedia.com/terms/e/environmental-social-and-governance-esg-criteria.asp.
[101] See generally Colleen Kane, By Popular Demand: Companies That Changed Their Ways, CNBC (Apr. 28, 2015), https://www.cnbc.com/2015/04/27/by-popular-demand-companies-that-changed-their-ways.html.
[102] See Arielle Pardes, Yet Another Year of Venture Capital Being Really White, WIRED (Dec. 29, 2020), https://www.wired.com/story/venture-capital-2020-still-really-white/ (“The issues with diversity in venture capital are not new, but the problems rarely received the sustained public attention that came this year . . . . Some prominent venture firms set up special funds this year to get more money into the hands of founders from underrepresented groups.”).
[103] See generally E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75.
[104] See Eur. Comm’n, supra note 77 (providing a detailed explanation of the legislation’s risk tier scheme).
[105] See generally E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75.
[106] See E.U. Charter art. 21, supra note 79.
[107] See generally E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75.
[108] See, e.g., Vlad Andrei, Will Bitcoin Ever Be Regulated?, Albaron Ventures (Oct. 29, 2019), https://albaronventures.com/will-bitcoin-ever-be-regulated/ (noting that absent clear congressional direction regarding agency authority over cryptocurrency, another emerging technology, creating “overarching regulatory guidelines” is proving difficult).
[109] See generally Title VII of the Civil Rights Act of 1964; see also Protections Against Discrimination and Other Prohibited Practices, FTC, https://www.ftc.gov/policy-notices/no-fear-act/protections-against-discrimination (last visited Mar. 26, 2022).
[110] See generally Federal Sector Employment Discrimination Complaint Process within the FTC, FTC, https://www.ftc.gov/policy-notices/no-fear-act/complaint-process#remedies (last visited Mar. 26, 2022) (showing that if a potential startup founder were denied funding due to an AI determination on the basis of her being a member of a protected class, she would have the right to (1) launch a complaint against the VC fund for injunctive relief and monetary damages, and (2) require that the VC fund show concrete reasons for denying funding absent race or gender or proxy factors for them; failure to show concrete reasons for denying funding would trigger an audit by, e.g., the FTC to determine whether the AI is compliant with the proposed legislation).
[111] See, e.g., 26 U.S.C. § 7608 (explaining that the IRS has broad authority to audit a company’s records to ensure that the company is compliant with the law; similarly, the proposed legislation would allow the controlling agency to audit the AI to ensure compliance).
[112] A plausible solution would extend the Title VII cause of action for Disparate Treatment to programmers and VC funds utilizing biased AI.
[113] See E.U. Proposal about Artificial Intelligence Rules and Amendments, supra note 75.