Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.


A Terrible Blast From the Recent Past and A Terrifying Glimpse Into the Future

 

Mr. Emory’s entire life’s work is available on a 64GB flash drive for a contribution of $65.00 or more (to KFJC). Click Here to obtain Dave’s 40+ years’ work, complete through early 2025 (the previous version ran through FTR #1215, almost two years ago). NB: The new flash drive will contain all the new conversations with Monte, as well as all of Dave’s work on the origins of the coronavirus.

WFMU-FM is podcasting For The Record. You can subscribe to the podcast HERE.

EVERYTHING MR. EMORY HAS BEEN SAYING ABOUT THE UKRAINE WAR IS ENCAPSULATED IN THIS VIDEO FROM UKRAINE 24

ANOTHER REVEALING VIDEO FROM UKRAINE 24

Dr. Jeffrey Sachs is “pretty convinced” Covid came from a U.S. bio-lab.

“Political language…is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.”

— George Orwell, 1946

KNSJ (89.1 FM) is now airing “For The Record”

Dave Emory has launched a Patreon site: www.patreon.com/DaveEmory

COMMENT: We have spoken at great length about Covid-19 as a biological warfare weapon. The overlapping, massive series “Bio-Psy-Op Apocalypse Now,” “The Oswald Institute of Biology” and “Pandemics, Inc.” encompass the presentation of that material.

We have spoken of Operation Warp Speed and the military’s domination of same.

Now, we learn that the development of the vaccines was realized through Palantir, Peter Thiel’s alpha predator of the electronic surveillance landscape.

Furthermore, we are now seeing calls to turn our whole society and government over to Artificial Intelligence.

A terrifying preview of what AI may bring is available in L‑2.

“Let AI remake the whole U.S. government (oh, and save the country)” By Josh Tyrangiel; Washington Post; 03/06/2024.

My awakening began in the modern fashion — late at night, on YouTube. Months later the video still has just 3,900 views, so I’d better describe it. A few dozen people have gathered to watch a presentation. It’s capably produced — like a midsize college professor’s audition for a TED Talk. The presenter, in a patterned blazer and blue oxford, is retired four-star general Gustave Perna. “I spent 40 years in the Army,” Perna begins, the hard edges of his New Jersey accent clanging a little in the room. “I was an average infantry officer. I was a great logistician.”

It’s a leisurely start. And yet the closest comparison I have for what comes next is Star Wars. Because once he gets through his slow-crawl prologue, Perna tells a story so tense and futuristic that, by the end, it’s possible to glimpse a completely different way in which we might live as citizens.

Also, there’s warp speed. Perhaps Perna’s name sounds familiar. It should. He oversaw the effort to produce and distribute the first coronavirus vaccines — a recent triumph of U.S. policy that’s been erased by the stupidity of U.S. politics. Perna was a month from retirement in May 2020 when he got a Saturday morning call from the chairman of the Joint Chiefs. Arriving in Washington two days later to begin Operation Warp Speed, his arsenal consisted of three colonels, no money and no plan. The audience is focusing now.

Perna tells them that what he needed more than anything was “to see myself.” On the battlefield this means knowing your troops, positions and supplies. It means roughly the same thing here, except the battlefield is boundaryless. Perna needed up-to-the-minute data from all the relevant state and federal agencies, drug companies, hospitals, pharmacies, manufacturers, truckers, dry ice makers, etc. Oh, and that data needed to be standardized and operationalized for swift decision-making. … To see himself, Perna needed a real-time digital dashboard of an entire civilization.

This being Washington, consultants lined up at his door. Perna gave each an hour, but none could define the problem, let alone offer a credible solution. “Excruciating,” Perna tells the room, and here the Jersey accent helps drive home his disgust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, determine risk, and make decisions rapidly.” Perna shut down the process immediately. “I said great, you’re hired.”

Julie and Aaron work for Palantir, a company whose name curdles the blood of progressives and some of the military establishment. We’ll get to why. But Perna says Palantir did exactly what it promised. Using artificial intelligence, the company optimized thousands of data streams and piped them into an elegant interface. In a few short weeks, Perna had his God view of the problem.

A few months after that, Operation Warp Speed delivered vaccines simultaneously to all 50 states. When governors called panicking that they’d somehow been shorted, Perna could share a screen with the precise number of vials in their possession. “‘Oh, no, general, that’s not true.’ Oh, yes. It is.”

… When Joe Biden delivers his State of the Union on March 7, he’ll likely become the first president to use the phrase artificial intelligence in the address. The president has been good on AI. His executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” threw a switch activating the federal bureaucracy’s engagement.

He’s delegating to smart people and banging the drum about generative AI’s ability to create misinformation and harm national security. That’s plenty for a speech. But the vision remains so small compared with the possibilities. This is technology that could transform almost everything about our society, yet neither the president nor his political rivals have imagined how it might do the same for the government itself. So allow me.

According to a 2023 year-end Gallup poll, Americans’ confidence in 15 institutions — covering things such as health care, education and regulation — is at historic lows. The poll’s conclusion is that government is suffering an acute crisis of legitimacy. We no longer trust it to fix important things in our lives. If confidence in the effectiveness of government keeps eroding at this pace, how much longer do you think we can remain united? How easy do we want to make our dismantling for the nihilists already cheering it on?

Properly deployed, AI can help blaze a new path to the shining city on a hill. In 2023, the national taxpayer advocate reported that the IRS answered only 29 percent of its phone calls during tax season. Human-based eligibility decisions for the Supplemental Nutrition Assistance Program have a 44 percent error rate. Large-language-model-powered chatbots could already be providing better service — at all hours, in all languages, at less cost — for people who rely on the federal government for veterans benefits, student loans, unemployment, Social Security and Medicare. That’s table stakes.

Now think about Warp Speeding entire agencies and functions: the IRS, which, in 2024, still makes you guess how much you owe it; public health surveillance and response; traffic management; maintenance of interstates and bridges; disaster preparedness and relief. AI can revolutionize the relationship between citizens and the government. We have the technology. We’ve already used it. Mention Operation Warp Speed to skeptics and they’ll wave you off. It doesn’t count. In a crisis the great sloth of government can sprint, but in regular times procurement rules, agency regulators and the endless nitpicking of politics make big things impossible. All true.

There’s another strain of skepticism that goes like this: Are you insane? AI might create all kinds of efficiency, but it’s also been known to have systemic biases that could get encoded into official government systems, lack transparency that could undermine public trust, make loads of federal jobs obsolete, and be vulnerable to data breaches that compromise privacy and sensitive information. If AI were a Big Pharma product, the ads would be 10 minutes long.

We can put guardrails around how the government uses AI — anonymizing personal data as they do in the European Union, creating oversight bodies for continuous monitoring — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the deadliness of the disease. To check my premise, I set up a Zoom call with Perna. He was in sweats at his home in Alabama, and if he missed carrying the weight of the world he did a great job hiding it. He consults a little for Palantir now, but mostly he was excited to talk about grandkids, the Yankees and the best New York City slice joints.

His mood shifted when I asked what government could improve if it embraced AI. “Everything,” he snapped, before the question was fully out. “I don’t understand how we’re not using it for organ donation right now. We should be ashamed. Why do we need 80,000 new people at the IRS? We could revolutionize the budget process. I tell Palantir, why are you playing around with the Department of Defense? Think bigger.”

… Imagine all of an organization’s data sources as a series of garden hoses in your backyard. Let’s say the organization is a hospital. There are hoses for personnel, equipment, drugs, insurance companies, medical supplies, scheduling, bed availability and probably dozens of other things. Many of the hoses connect up to vendors and many connect to patients. No one can remember what some of them are supposed to connect to. All were bought at different times from different manufacturers and are different sizes and lengths. And it’s a hospital, so hose maintenance has never been anyone’s top priority. Now look out the window. There’s a pile of knotted rubber so dense you can’t see grass. Palantir untangles hoses.

“We’ve always been the mole people of Silicon Valley,” says Akshay Krishnaswamy, Palantir’s chief architect. “It’s like we go into the plumbing of all this stuff and come out and say, ‘Let’s help you build a beautiful ontology.’” In metaphysics, ontology is the study of being. In software and AI, it’s come to mean the untangling of messes and the creation of a functional information ecosystem. Once Palantir standardizes an organization’s data and defines the relationships between the streams, it can build an application or interface on top of it.

This combination — integrated data and a useful app — is what allows everyone from middle managers to four-star generals to have an AI co-pilot, to see themselves with the God view. “It’s the Iron Man suit for the person who’s using it,” says Krishnaswamy. “It’s like, they’re still going to have to make decisions but they feel like they’re now flying around at Mach 5.”

The most dramatic expression of Palantir’s capabilities is in Ukraine, where the company merges real-time views from hundreds of commercial satellites with communications technology and weapons data. All of that information is then seamlessly displayed on laptops and handheld dashboards for commanders on the battlefield.

A senior U.S. military official told me, “The Ukrainian force is incredibly tough, but it’s not much of a fight without iPads and Palantir.” I mentioned that progressives and some of the military establishment dislike Palantir. Each has a reason. The company was co-founded in 2003 by Peter Thiel, which explains much of the hatred from the far left.

Thiel spoke at the 2016 Republican convention, endorsed Donald Trump in 2016, dislikes multiculturalism, financed a lawsuit to kill Gawker and then tried to buy its corpse. The enmity here is mutual, but also kind of trivial. Palantir has another co-founder. His name is Alex Karp, and many people in the Pentagon find him very annoying. The quick explanation is that Karp is loud and impatient, and he’s not one of them. But it’s more troubling than that. Karp was born in New York City to a Black mother and a Jewish father.

He’s severely dyslexic, a socialist, a 2016 Hillary Clinton supporter. When we spoke in Palantir’s New York offices, it was clear that he’s both whip-smart and keeps a careful accounting of the slights he’s accumulated. “Quite frankly,” Karp told me, “just because of biographical issues, I assume I am going to be screwed, right?” It was like meeting the protagonist from a book co-authored by Ralph Ellison and Philip Roth. Thiel and Karp were law school classmates at Stanford in the early ’90s.

They argued plenty, but agreed about enough to create Palantir with partial funding (less than $2 million) from In-Q-Tel, an investment arm of the CIA, and a few core beliefs. The first is that the United States is exceptional, and working to strengthen its position in the world benefits all humanity. “I’ve lived abroad,” Karp says. “I know [America] is the only country that’s remotely as fair and meritocratic as America is. And I tend to be more focused on that than the obvious shortcomings.”

In a speech last year, Karp, who is CEO, explained what this means for the company: “If you don’t think the U.S. government should have the best software in the world … We respectfully ask you not to join Palantir. Not in like you’re an idiot, just we have this belief structure.” The company’s second core belief springs from the chip on Karp’s shoulder. Like generations of Black and Jewish entrepreneurs before him, Karp presumes his company isn’t going to win any deals on the golf course. So to get contracts from Fortune 500 companies and governments, Palantir must do things other software companies won’t, and do them so fast and cheap that the results are irrefutable.

This approach has worked exceedingly well in the corporate world. Palantir’s market capitalization is $52 billion and its stock has climbed more than 150 percent in the past year, largely because of demand for its AI products. But for much of its existence, an openly patriotic company with software better, faster and cheaper than its competitors was shut out of U.S. defense contracts. In the mid-2010s this put Palantir’s survival at risk and sharpened Karp’s indignation to a fine point. Either his biography had made him paranoid or something was amiss. In 2016, Palantir took the unprecedented step of suing the Pentagon to find out.

The case alleged the Defense Department was in violation of the Federal Acquisition Streamlining Act, a 1994 law that prohibits the government from starting new bloat-filled projects if an off-the-shelf solution is available. The House Committee on Government Operations made its intent unusually clear: “The Federal Government must stop ‘reinventing the wheel’ and learn to depend on the wide array of products and services sold to the general public.”

The record of Palantir v. United States is about as one-sided as these things can be. In the Court of Federal Claims, Palantir was able to document soldiers, officers and procurement people acknowledging the supremacy and lower cost of its in-market products — and show the Pentagon was still buying a more expensive proposal, years from effective deployment, offered by a consortium of Raytheon, Northrop Grumman and Lockheed Martin.

The Army’s defense can be summarized as, “Yeah, well, that’s kinda how we do stuff.” Palantir’s lawyers responded with insults about structural inertia, backed with receipts. Boies, Schiller & Flexner had themselves a time. Palantir’s victory was resounding, and opened the door to what is now a more functional relationship. Wednesday, the Army announced that Palantir won a $178 million contract to make 10 prototypes for the next phase of its tactical intelligence targeting node (Titan) program. Titan is a ground station that uses sensor data from space, sky and land to improve long-range weapons precision.

Still, Karp insists rivals regularly win contracts with video presentations of unbuilt solutions over existing software from Palantir. Several people I spoke with in the Defense Department volunteered that Palantir’s software is excellent — and a few said they’d be happy if the company would go away. It challenges too many things about the procurement culture and process. One noted that Palantir’s D.C. office is in Georgetown near (gasp) a Lululemon as opposed to in the traditional valley of contractors adjacent to the Pentagon. … Palantir’s saga doesn’t prove that government employees are bad, merely that humans can tolerate limitless amounts of dysfunction, especially when everyone around them is doing the same.

They’re trapped in a system where all incentives point toward the status quo. Perna wants Palantir to think bigger, but remember: The Defense Department can embrace and expedite things in the name of national security that others cannot. It’s one of the most AI-friendly parts of the government. The challenge then is fixing a massive system that has become constitutionally resistant to solutions, particularly ones fueled by technology such as artificial intelligence. It’s a Möbius strip that no one can seem to straighten out.

But Karp sees a direct line between Palantir’s experience and the peril of the current moment. “Every time I see ordinary interactions between ordinary citizens and the government, it’s very high friction for no reason,” he says. “And then there’s almost no output. Forget the dollars spent. Whether it’s immigration, health records, taxation, getting your car to work, you’re going to have a bad experience, right? And that bad experience makes you think, ‘Hmm, nothing works here. And because nothing works here I’m going to tear down the whole system.’”

A few months before Palantir sued the United States in 2016, Eric Schmidt got a call from Defense Secretary Ashton B. Carter. Carter was launching something called the Defense Innovation Board to try to get more tech thinking into the Pentagon. He wanted Schmidt, then the executive chairman of Google’s parent company Alphabet, to join. “I declined,” says Schmidt. “And Carter said, ‘Well, you know, do it anyway.’”

I’ve spoken with Schmidt several times over the years and he’s been about as predictable as a Holiday Inn. But as he recalled his time on the Defense Innovation Board there was a different tone, like the guy in a horror movie who’s been chilled by his encounter with a vaguely threatening supernatural force. The quiet one who says, “You don’t know what’s out there, man.”

Carter let the Defense Innovation Board examine everything it needed to assess how the Pentagon develops, acquires and uses technology — the 99.9 percent of the iceberg that remained out of sight in the Palantir court case. Pretty quickly Schmidt concluded the entire federal apparatus has accidentally mutated into software’s perfect enemy. “AI is fundamentally software,” says Schmidt. “You can’t have AI in the government or the military until you solve the problem of software in the government and military.”

Most government projects work backward from an outcome — a bridge will be built from point X to point Y and cost Z. Software is an abstraction moving toward a destination that’s always changing. Google didn’t create a search box and then close up shop; it kept spending and staffing because that’s how technology gets better and more usable. Unlike a bridge, software is never done. Try selling that to bureaucrats who are told they must pay for only what they can document. Schmidt described for me the normal course of software development — prototyping with a small group of engineers, getting lots of user feedback, endless refinement and iteration.

“Every single thing I just told you is illegal,” Schmidt says. If only this were true. We could then just make things legal and move on. In fact, Congress — though hardly blameless — has given the Defense Department countless workarounds and special authorities over the years. Most have been forgotten or ignored by public servants who are too scared to embrace them. Take one of Schmidt’s examples: you really are allowed to conduct software user surveys, but most staffers at the Office of Information and Regulatory Affairs interpret the legal guidance to mean a six-month review process is required before granting permission.

A six-month wait for a product that never stops moving. That means normal software practices are worse than illegal. They’re a form of bureaucratic torture. The Defense Innovation Board channeled its bewilderment into a masterpiece: “Software Is Never Done: Refactoring the Acquisition Code for Competitive Advantage.” I’m not being ironic. It’s the most reasonable, stylish and solutions-based critique of modern government I’ve ever read.

The authors did the unglamorous work of going through the infested garden of processes and rules and called out many of the nastiest weeds. Then they made common-sense recommendations — treat software as a living thing that crosses budget lines; do cost assessments that prioritize speed, security, functionality and code quality; collect data from the department’s weapons systems and create a secure repository to evaluate their effectiveness — and urged Congress to pass them.

They also referenced the dozen previous software reports commissioned by the military dating back to 1982, all of which came to similar conclusions. The problem isn’t a lack of solutions, it’s getting Congress to approve the politically risky ones and “the frozen middle” to implement them: “We question neither the integrity nor the patriotism of this group. They are simply not incentivized to the way we believe modern software should be acquired and implemented, and the enormous inertia they represent is a profound barrier to change.”

When software becomes a crisis, politicians call Jennifer Pahlka. Pahlka was deputy chief technology officer in the Obama administration and was crucial to the rescue of healthcare.gov — the most flawed, fraught and ultimately successful software project in government history. In 2020, Gavin Newsom bat-signaled her to untangle California’s unemployment insurance program as it buckled under the weight of the covid-19 response. “I come to this work,” says Pahlka, “with the assumption that people are having a f—— nervous breakdown.”

Pahlka served with Schmidt on the Defense Innovation Board, which affirmed decades of her experience at the convergence of software and government. The dysfunction loop begins when absurd processes are given to public servants who will be judged on their compliance with absurdity. If they do their jobs right, the nation purchases obsolete, overpriced software. If they make a mistake or take a risk that defies the absurdity, politicians hold hearings and jump all over them — which is far simpler than fixing the process.

Each recrimination drives more good people out of public service. Rinse, repeat. What Pahlka has noticed recently is that the wave is cresting. More things are breaking, and the remaining competent public servants who understand technology are just barely hanging on. “Most of what I do on a daily basis is like therapy,” Pahlka says. “I tell people, ‘Those feelings you’re having are normal. The only way to get through them is to share them.’” The dedication in her excellent book, “Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better,” said, “To public servants everywhere. Don’t give up.” Pahlka told me, “I’ve had people come up to me and ask me to sign and they just start crying.” It’s not just the rank and file.

Schmidt ended up serving four years on the Defense Innovation Board. When we were wrapping up our conversation, he took a breath and paused for a moment. “I’m not going to make a more emotional argument, I’m just going to tell you the following: Government will perform suboptimally until it adopts the software practices of the industry.” He sounded pretty emotional. It did not take someone with John F. Kennedy’s charisma to inspire Americans to go to the moon. The moon is big and pretty. Humanity has been dreaming about it for eons. Calvin Coolidge levels of charm would have sufficed.

The challenge of using AI for better government is very different. The excitement about a new thing is tempered by fear and confusion. To get the maximum reward from AI, the country must first go through an unprecedented vegetable-eating exercise to clean up its bureaucracy. Turning that into poetry is hard. There’s no ideal messenger, but an octogenarian whose best speeches are about grief and a septuagenarian whose speeches are barely speeches is perhaps not the optimal set of choices. … The truth.

The relationship between citizens and government is fractured. It’s crucial to the republic’s survival that we stop defending the status quo. New technology can help us repair the damage and open the door to a level of service and efficiency that will make Scandinavians seethe with envy. Almost all of this AI tech has been created by American ingenuity inside American companies, and the American people deserve its benefits.

Next, say the thing Democrats don’t want to say: Not every government job should be a job for life. LLMs can provide better service and responsiveness for many day-to-day interactions between citizens and various agencies. They’re not just cheaper, they’re also faster, and, when trained right, less prone to error or misinterpretation. That means it’s possible the federal government will soon have fewer employees.

But AI will never replace human judgment — about benefits, penalties or anything in between. It’s a tool to be used by Americans to make better decisions for our national well-being. That earns you the right to say the thing reasonable Republicans don’t want to hear: Their bluff is going to be called. If they continue to indulge the party’s idiotic fantasies of burning the entire federal apparatus to the ground, they’ll be left holding the ashes. They need to admit that a properly run government has an important role in people’s lives, and they need to co-sign fixing it.

Without crossing their fingers behind their backs. All this is preamble to the work — methodical demolition and joyful construction. Pahlka says the policy guidelines that govern the Defense Department equal 100 stacked copies of “War and Peace.” There are more than 7,000 pages of unemployment regulations. Luckily, untangling the United States’ hairball of fine print is the perfect job for AI. Banks already use it to deduplicate obsolete compliance rules.

Pahlka is working to demonstrate its feasibility inside agencies. The Pentagon is experimenting with an AI program called Gamechanger that helps bureaucrats navigate its own bureaucracy. It’s easy to mock, and we’ll still need countless human hours of oversight — many of them from Congress — to ensure the job’s done right. But it’s exactly the kind of humble first step that deserves praise. Turbocharge these efforts, then start building. But not everywhere, at least not at first.

One of the secrets of great software is that it’s not built all at once. Projects get broken down into manageable units called sprints; teams get feedback, make adjustments in real time, then use that knowledge to tackle the next sprint. It’s a form of common sense that the industry calls agile development. The United States should do its first agile AI sprint in its most broken place, where the breach of trust and services is the most shameful.

You likely know the statistics about Veterans Affairs, but there’s one worth repeating: 6,392 veterans died by suicide in 2021, the most recent year numbers are available. A ProPublica review of inspector general reports found VA employees regularly “botched screenings meant to assess veterans’ risk of suicide or violence; sometimes they didn’t perform the screenings at all.” What if we treat VA like the crisis it is? It’s not as simple as untangling hoses between veterans and the department.

A lot of care is managed manually. But when we create digital infrastructure, appointment scheduling can run on AI. A cascade of benefits would follow, such as reduced wait times, analytics that predict demand for services, and automated reminders and follow-ups so VA staff can focus on patients over paperwork. Next, make a first-alert chatbot for veterans that, only with their consent, can be used to look for signs of crisis or suicidal thoughts, offers coping mechanisms and resources, and escalates cases to mental health providers.

The big one is personalized care. Veterans deserve to be empowered with a God view of their own treatment, and that data can be anonymized and analyzed for insights into veteran-specific conditions such as post-traumatic stress disorder and traumatic brain injuries. Is there risk? There is. Is the risk worse than an average of 18 veterans killing themselves each day? I don’t think so.

Let’s give ourselves a countdown clock: one year to make it happen. It’s a problem similar in scale, complexity and importance to Operation Warp Speed. There’s a grandpa in Alabama who might be convinced to help. There are more questions — part of getting AI into government is realizing there will be no getting it out. It turns out that good software and good government are more similar than we knew: Neither is ever done.

Over the past few decades, the federal government stopped changing. One side tried to cripple it while the other responded with smothering levels of affection and excuses. These equal and irrational forces created stasis and decay, but American lives kept moving forward with new needs and expectations. This new era of AI has presented a once-in-a-century chance to wipe away a lot of the damage and renew the mission. Not to the moon, but to a more perfect union.
