Commissioner Hester M. Peirce on the SEC's Conflicts of Interest When Using Predictive Data Analytics by Broker-Dealers and Investment Advisers Proposal: "I dissent from this proposal and the thinking it embodies."

https://www.sec.gov/news/statement/peirce-statement-predictive-data-analytics-072623

Highlights:

The best thing I can say for this proposal is that it serves, perhaps unintentionally, as a mirror reflecting the Commission’s distorted thinking. In that mirror, you will see the Commission’s attitude toward technology, which is not neutral, but hostile. It reflects this Commission’s loss of faith in one of the pillars of our regulatory infrastructure: the power of disclosure and the corresponding belief that informed investors are able to think for themselves. Another glance through the looking glass will reveal the Commission's continued degradation of a principles-based regulatory regime, replacing it once again with overly prescriptive rules. And a final look reveals the Commission’s indifference to operational feasibility. I dissent from this proposal and the thinking it embodies.
  • Hester argues the proposal's broad language could cover common tools like spreadsheets, and would subject certain technologies to severe scrutiny, effectively 'banning' them.
  • Believes the proposal wrongly assumes that investors cannot resist being psychologically manipulated by technology.

Proposed Rule:

[Image: screenshot of the proposed rule]

Fact Sheet:

[Images: screenshots of the fact sheet]

TLDRS:

  • Hester is big mad about the SEC's proposed new rules and amendments to address certain conflicts of interest associated with the use of predictive data analytics by broker-dealers and investment advisers in investor interactions.
  • "I dissent from this proposal and the thinking it embodies."

Hester's Full Speech:

Thank you, Chair Gensler. The best thing I can say for this proposal is that it serves, perhaps unintentionally, as a mirror reflecting the Commission’s distorted thinking. In that mirror, you will see the Commission’s attitude toward technology, which is not neutral, but hostile. It reflects this Commission’s loss of faith in one of the pillars of our regulatory infrastructure: the power of disclosure and the corresponding belief that informed investors are able to think for themselves. Another glance through the looking glass will reveal the Commission's continued degradation of a principles-based regulatory regime, replacing it once again with overly prescriptive rules. And a final look reveals the Commission’s indifference to operational feasibility. I dissent from this proposal and the thinking it embodies.

Despite protestations that “[t]he proposal is intended to be technology neutral” and does “not seek[] to identify which technologies a firm should or should not use,”[i] the proposal reflects a hostility toward technology. That antagonism is trained at predictive data analytics (or “PDA”) technologies, “such as [artificial intelligence (AI)], machine learning, or deep learning algorithms, neural networks, [natural language processing (NLP)], or large language models (including generative pre-trained transformers), as well as other technologies that make use of historical or real-time data, lookup tables, or correlation matrices among others.”[ii] To get at those technologies, the rule would define a “covered technology” as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes of an investor.”[iii] Given that broad language, spreadsheets,[iv] commonly used software, math formulas, statistical tools, and AI trained on all manner of datasets,[v] could fall within the ambit of this rulemaking. Once in this category, a technology would be subject to an intense review for conflicts of interest as specially defined for this rule,[vi] which would then have to be eliminated or neutralized. Requiring firms to subject certain types of technologies to a uniquely onerous review and conflict remediation process is not technology neutral. Let us be honest about what we are doing here: banning technologies we do not like. As the release admits, one consequence of this initiative is that “a firm might opt not to use an automated investment advice technology because of the costs associated with complying with the proposed rules.”[vii] We risk depriving investors of the benefits of technological advancement.
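To make the breadth concrete, here is a minimal, hedged sketch (entirely my own construction; the function name, weights, and fund data are invented, not from the release) of how even a few lines of spreadsheet-style arithmetic that rank products for an investor could arguably "optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes":

```python
# Hypothetical illustration: a trivial ranking formula of the kind the
# proposal's "covered technology" definition could arguably sweep in.
# Nothing here is from the SEC release; names and weights are made up.

def rank_products(products):
    """Sort products by a simple weighted score (past return vs. fee).

    This is the computational equivalent of a two-column spreadsheet,
    yet it arguably "optimizes for ... investment-related behaviors or
    outcomes" of whoever acts on the ranking.
    """
    def score(p):
        return 0.7 * p["past_return"] - 0.3 * p["fee"]
    return sorted(products, key=score, reverse=True)

products = [
    {"name": "Fund A", "past_return": 0.08, "fee": 0.010},
    {"name": "Fund B", "past_return": 0.06, "fee": 0.002},
]
print([p["name"] for p in rank_products(products)])
```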

But this release does more than single out particular technologies for regulatory hazing; it also rejects one of our primary regulatory tools—disclosure. If a firm determines that the use (or potential use) of a covered technology involves a conflict of interest, then the firm has to eliminate or neutralize the conflict. Disclosure is not an option. In many ways, the discussion surrounding the inadequacy of disclosure is the most troubling aspect of the proposal. The long-term ramifications of the Commission’s rationale for dismissing the value of disclosure—namely, that disclosure is of no use to investors[viii]—cannot be overstated. The release explains that disclosure cannot work because investors are powerless pawns incapable of resisting psychological manipulation by technologies designed to play to their “proclivities.”[ix] The release even hints that certain investors might be particularly vulnerable because of their sex, age, how educated their parents are, and even their height.[x] Is the next step going to be to make investment decisions for investors we deem incapable of making their own decisions? The whole premise of our disclosure regime is that investors can think for themselves.

In addition to rejecting disclosure, this proposal continues the Commission’s layering on of obligations. While some covered technologies may create unique challenges,[xi] advisers are bound by their obligations as fiduciaries, and broker-dealers are bound by Regulation Best Interest and FINRA rules.[xii] The Commission describes this proposed rulemaking as a “supplement . . . to existing regulatory obligations related to conflicts.”[xiii] As we make clear in the release, “[b]roker-dealers and investment advisers are currently subject to extensive obligations under Federal securities laws and regulations . . . that are designed to promote conduct that, among other things, protects investors . . . from conflicts of interest.”[xiv] Under these overarching standards, firms using covered technologies have to identify and mitigate conflicts of interest. We already have the ability to pursue bad actors. We should be considering issuing guidance or conducting a roundtable to discuss topics such as adaptive AI, but we do not need standalone rules. Today’s proposal joins a growing list of Commission rulemakings that are unnecessary.[xv]

Given that this rule is designed merely to supplement other rules, the Commission’s utter disregard for operational feasibility is inexplicable. For any covered technology, broker-dealers and investment advisers will have to conduct a conflict identification process that is itself bereft of discernible borders. Eye-wateringly detailed written policies and procedures would have to cover every aspect of the evaluation and assessment of potential conflicts and how to handle them. The whole process would be capped off by a “review and written documentation of that review . . . of the adequacy of the [firm’s] policies and procedures” that would have to be conducted at least annually.[xvi] In a Through-the-Looking-Glass kind of way, we present these proposed obligations as principles-based, but that characterization melts against the description of our expectations. When establishing their evaluation methods, firms “may adopt an approach that is appropriate for [their] particular use of covered technology, provided”—there always seems to be a “provided” or a “however”—provided that the firm identifies conflicts “associated with how the technology operated in the past . . . and how it could operate once deployed,” as well as, in most instances, “other scenarios that are reasonably foreseeable.”[xvii]

The release offers a break for “a firm that only uses simpler covered technologies in investor interactions, such as basic financial models contained in spreadsheets or simple investment algorithms.” Such a firm “could take simpler steps to evaluate the technology and identify any conflicts of interest.”[xviii] Firms thinking of using “more advanced covered technologies” might have to “build ‘explainability’ features into the technology,” to describe why the program reached “a particular outcome, recommendation, or prediction.”[xix] If explainability features are not available, a firm might have to forgo using the technology or modify it to include explainability features and back-end controls.[xx] What firm, large or small, would feel confident that it has a handle on what to expect when Examinations or Enforcement comes knocking?[xxi] Will any but the largest firms have the personnel and resources needed to comply with the proposed evaluation and testing standards? Small firms will have to abandon worthwhile technologies that benefit investors and firms.[xxii] Get out your abacuses, I guess.
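For a rough sense of what “explainability features” might mean in practice, here is a hedged sketch assuming a simple linear scoring model (my own illustration; the release does not prescribe any particular technique, and the weights and feature names are invented): per-feature contributions can be reported alongside each score.

```python
# Hypothetical sketch of an "explainability" feature for a linear model.
# The weights and feature names are invented for illustration; the SEC
# release does not prescribe any particular technique.

WEIGHTS = {"past_return": 0.7, "fee": -0.3, "firm_revenue_share": 0.0}

def score_with_explanation(features):
    """Return a score plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"past_return": 0.08, "fee": 0.010, "firm_revenue_share": 0.25}
)
print(f"score={score:.4f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.4f}")
```

A linear model makes this trivial; producing an equally faithful explanation for a deep network is far harder, which is presumably why the release contemplates firms forgoing such technologies altogether.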

I hope that just as Alice did, we will wake up from this dream and find ourselves back on the other side of the looking glass. In the meantime, however, I am eager to hear what commenters say about the proposal and to see our reevaluation of the rule in light of those comments. As is always the case, however, my inability to vote in favor of a rulemaking should not be taken as a reflection of my views of the Commission staff. I maintain a deep appreciation for how hard they work under vexing conditions. Deadlines are more demanding than ever, and the marching orders even more challenging to implement, but the staff’s dedication and talent continue to shine. A special shoutout to Sirimal Mukerjee and Blair Burnett. I do have some questions:

The definition of “covered technology” is quite broad. Do you mean to encompass Excel spreadsheets, for example, and mathematical formulas used to price securities?

The rule claims to be technology neutral—and maybe it is, given that the definition of “covered technology” is so broad—but tell me how I am wrong to think that we are creating an especially harsh rule for particular types of technology.

The release suggests that a non-disclosure approach is warranted here, saying that “due to the scalability of these technologies and the potential for firms to reach a broad audience at a rapid speed … any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible.”[xxiii] Other technologies have likewise facilitated firms’ rapid expansion. Why are covered technologies being singled out?

The release posits a situation in which “one conflicted factor among thousands in the algorithm or data set upon which a technology relies [causes] the covered technology to produce a result that places the interests of the firm ahead of the interests of investors, and the effect of considering that factor may not be immediately apparent without testing.”[xxiv] How could a firm get comfortable that it had done enough testing to spot that one conflicted factor in an algorithm or data set?
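To illustrate why such comfort is elusive, consider a hedged, toy perturbation test (again my own construction, with invented weights and factor names): detecting even a single conflicted factor means re-scoring with each factor ablated in turn, so the cost grows with the number of factors, and the test still misses interactions between factors.

```python
# Hypothetical perturbation test: ablate one factor at a time and see
# whether any single factor meaningfully shifts the model's output.
# Everything here is invented for illustration.

def model_score(factors, weights):
    return sum(weights[k] * v for k, v in factors.items())

def find_influential_factors(factors, weights, threshold=0.01):
    """Return factors whose removal shifts the score by more than threshold."""
    baseline = model_score(factors, weights)
    influential = {}
    for name in factors:
        ablated = {k: (0.0 if k == name else v) for k, v in factors.items()}
        shift = baseline - model_score(ablated, weights)
        if abs(shift) > threshold:
            influential[name] = shift
    return influential

# With thousands of factors, this loop runs thousands of model
# evaluations per input, and it still only catches single-factor
# effects, not interactions between conflicted factors.
weights = {f"f{i}": 0.001 for i in range(1000)}
weights["revenue_share"] = 0.5  # the one hypothetical "conflicted" factor
factors = {k: 0.1 for k in weights}
print(find_influential_factors(factors, weights))
```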

The release seems to reject disclosure as an ineffective tool given people’s inability to resist technological prompts designed to play into their unique psychological make-up. What are the limits to the argument that people, when faced with bespoke technological prompts, cannot think for themselves? In what other areas will regulation have to change to accommodate people’s inability to withstand technological nudges?

Given the application of this rule to investor interactions, rather than merely recommendations, do we have the authority to apply it to broker-dealers? Is it a backdoor attempt to expand Regulation Best Interest?

The economic analysis says that the proposed rules “could … act as barriers to entry or create economies of scale, potentially making it challenging for smaller firms to compete.” Why isn’t that “could” a “would”? It seems inevitable that a rule like this will prevent small firms from using technology that would enable them to serve their clients and compete with larger rivals.

What length would the compliance period be for the rule if it were to be adopted?

The release includes a helpful table identifying direct costs of the proposed rules. The estimate for firms with simple covered technology is 25 hours initially and 12.5 hours annually thereafter; for firms with complex covered technology, it is 350 hours initially and 175 hours annually thereafter. I found it hard to reconcile those estimates with the complexity of the processes the release describes. Can you provide me a window into how you arrived at those numbers by describing what a simple covered technology firm would look like and what it would have to do upon adoption of the rule as proposed?

The rule appears to assume that AI is so complex it needs special rules. Aren’t humans even more complex?
