What are the next challenges for AI?

The future of brain-machine synchronisation

with Hamilton Mann, Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD, Cornelia C. Walther, Senior Visiting Scientist at Wharton Initiative for Neuroscience (WiN) and Michael Platt, Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania
On October 30th, 2024 | 9 min reading time
Key takeaways
  • The evolution of AI represents a breakthrough in the relationship between humans and machines.
  • AI is now capable of generating human-like responses and adapting to the context of its interactions.
  • Advances such as brain-computer interfaces (BCIs) make it possible for AI to connect with human thoughts and emotions.
  • Neuroscience can also guide the development of AI, for example through the alternative of neuromorphic computing.
  • Despite its positive implications, the human-machine relationship raises major ethical issues, notably concerning data confidentiality and the preservation of human autonomy.

The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic, resonating with the fundamental human capacity to learn from experience and predict behaviour.

AI mimics human learning

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. Large Language Models (LLMs), for their part, play a crucial role in pattern recognition, capturing the intricate nuances of human language and behaviour. These models, such as ChatGPT and BERT, excel at understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to aspects of user behaviour, sometimes with remarkable accuracy.

The synergy between RL and LLMs creates a powerful predictor of human behaviour. RL contributes the ability to learn from interactions and adapt, while LLMs enhance prediction capabilities through pattern recognition. AI systems based on RL can thus display a form of behavioural synchrony. At its core, RL enables AI systems to learn a policy: a mapping from situations to actions that selects optimal sequences of actions in interactive environments. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.
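The feedback loop described above can be sketched in a few lines of code. The following is a minimal, illustrative tabular Q-learning example on a toy five-state corridor (an invented environment, not any production system): the agent learns purely from reward signals that moving towards the rewarded state is optimal.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# A five-state corridor: the agent starts in the middle; state 4 yields a
# reward, so "moving right" is the behaviour to be learned from feedback.
N_STATES = 5
ACTIONS = (-1, +1)                 # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(s, a):
    nxt = min(max(s + a, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0  # the feedback signal
    return nxt, reward, nxt == N_STATES - 1

for _ in range(300):               # episodes of interaction
    s = 2
    for _ in range(100):           # safety cap on episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        nxt, r, done = step(s, a)
        # Temporal-difference update: nudge the estimate toward the feedback
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, x)] for x in ACTIONS) - Q[(s, a)])
        s = nxt
        if done:
            break

# After training, the learned values prefer moving toward the reward
assert Q[(2, +1)] > Q[(2, -1)]
```

The update rule is the "hot surface" lesson in miniature: every interaction shifts the value estimate toward the feedback actually received.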

AI replicates human interactions

AI agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than with themselves.

What’s more, AI systems can also learn from interactions with humans. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not literally share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of ‘collective memory’. The analogy highlights the potential for AI systems to evolve under human influence while also influencing humans through their use, indicating a form of ‘computational synchrony’ that could be seen as an analogue of human brain synchrony.

In addition, AI systems enabled with social cue recognition are being designed to detect and respond to human emotions. These ‘Affective Computing’ systems, a term coined by Rosalind Picard1 in 1995, can interpret human facial expressions, voice modulations, and even text to gauge emotions and respond accordingly. An AI assistant that detects user frustration in real time and adjusts its responses or assistance strategy is a rudimentary form of behavioural synchronisation based on immediate feedback.

For instance, affective computing encompasses technologies like emotion recognition software that analyses facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis of text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions feel more natural and responsive.
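As a concrete illustration of this loop, here is a deliberately simple sketch in which a keyword-based sentiment score (standing in for a real emotion-recognition model) steers a hypothetical assistant’s response strategy. The function names, word lists, and canned replies are invented for the example.

```python
# Toy affective-computing loop: score the user's emotional tone, then
# adapt the response strategy. A real system would use a trained model
# rather than keyword matching; everything here is illustrative.
NEGATIVE = {"frustrated", "angry", "useless", "broken", "annoying"}
POSITIVE = {"great", "thanks", "helpful", "love", "perfect"}

def sentiment_score(text: str) -> int:
    """Crude sentiment: positive hits minus negative hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(user_message: str) -> str:
    score = sentiment_score(user_message)
    if score < 0:
        # Detected frustration: acknowledge, slow down, simplify
        return "I'm sorry this is frustrating. Let's go step by step."
    elif score > 0:
        return "Glad that helped! Anything else?"
    return "Understood. Here is the next step."

print(respond("this tool is broken and useless"))
```

However crude, the structure is the same as in the deployed systems described above: sense the user’s state, then condition the behaviour on it.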

Just as humans adjust their behaviour in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronisation’ over time. The social competence of such an AI system could be assessed by adapting tools like the Social Responsiveness Scale (SRS), a well-validated psychiatric instrument that measures how adept an individual is at modifying their behaviour to fit the behaviour and disposition of a social partner. The SRS serves as a proxy for ‘theory of mind’: the ability to attribute mental states, such as beliefs, intents, desires, emotions, and knowledge, to oneself and to others.

Moving towards resonance

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands, restoring and extending human communication. Companies like Neuralink are making strides in developing interfaces that enable paralysed individuals to control devices directly with their thoughts. By connecting direct recordings of brain activity to AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also decode not only what an individual is reading but what they are thinking, based on non-invasive measures of brain activity using functional MRI.

Based on these advances, it’s not far-fetched to imagine a future scenario in which a professional uses a non-invasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognising the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.
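To ground the scenario, the signal-processing core such a wearable pipeline would need can be sketched. The toy example below (synthetic signal; the sampling rate, frequency bands, and the very idea of reading a mental state off one band are illustrative assumptions, not a description of any named device) estimates the power in the alpha band from one second of samples using a naive discrete Fourier transform.

```python
import math

# One second of a synthetic "brainwave": a pure 10 Hz tone, which falls
# inside the alpha band (8-12 Hz). Real EEG is far noisier; this only
# illustrates the band-power computation itself.
FS = 256                       # sampling rate in Hz (assumed)
N = FS                         # one-second analysis window
signal = [math.sin(2 * math.pi * 10 * t / FS) for t in range(N)]

def band_power(x, lo_hz, hi_hz, fs):
    """Sum normalised DFT power over the bins covering [lo_hz, hi_hz]."""
    n = len(x)
    power = 0.0
    for k in range(lo_hz * n // fs, hi_hz * n // fs + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power += (re * re + im * im) / (n * n)
    return power

alpha = band_power(signal, 8, 12, FS)   # captures the 10 Hz tone
delta = band_power(signal, 1, 4, FS)    # essentially empty here
assert alpha > delta
```

A real pipeline would then map such band-power features, through a trained classifier, to states like “satisfied” or “dissatisfied”; the mapping, not the arithmetic, is where the open research lies.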

Crucially, the resonance envisioned here transcends the behavioural domain to encompass communication as well. As BCIs evolve, the capacity for outward expression becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal signals into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing the holistic nature of human expression, creating a more immersive and natural interaction.

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon in which humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount: the AI’s responsiveness must align authentically with human expressions without entering the discomfiting realm of the uncanny valley. The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration.

Turning to neuroscience

Neuroscience not only illuminates the basis of biological intelligence but may also guide the development of artificial intelligence2. Evolutionary constraints such as space and communication efficiency have shaped the emergence of efficient systems in nature. Embedding similar constraints in AI systems, so that artificial architectures evolve organically towards efficiency and environmental sustainability, is the focus of research in so-called “neuromorphic computing”.

For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval3. This interplay has been likened to an advanced data transmission system, in which low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al.4 found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers.
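The ‘postal service’ analogy can be made concrete with a toy scheduler. Assuming illustrative frequencies (8 Hz theta and 40 Hz gamma, giving about five gamma sub-cycles per theta cycle, in the spirit of Lisman and Idiart’s model3), items are slotted into successive gamma slots within each theta cycle:

```python
# Toy illustration of theta-gamma "packaging": within each slow theta
# cycle, a handful of items are assigned to successive fast gamma
# sub-cycles. Frequencies are illustrative round numbers.
THETA_HZ, GAMMA_HZ = 8, 40
slots_per_theta = GAMMA_HZ // THETA_HZ   # ~5 gamma slots per theta cycle

def package(items):
    """Assign each item a (theta_cycle, gamma_slot) transmission slot."""
    schedule = []
    for i, item in enumerate(items):
        theta_cycle, gamma_slot = divmod(i, slots_per_theta)
        schedule.append((theta_cycle, gamma_slot, item))
    return schedule

for theta, gamma, item in package(["A", "B", "C", "D", "E", "F", "G"]):
    print(f"theta cycle {theta}, gamma slot {gamma}: {item}")
```

Seven items overflow the five slots of the first theta cycle, so the sixth and seventh spill into the next cycle, echoing the classic “7 +/- 2” capacity limit of short-term memory3.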


In the mammalian brain, sharp wave ripples (SPW-Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei5. Within these SPW-Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences6. This orchestrated activity aids the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation7.

Recent AI experiments, particularly those involving OpenAI’s GPT‑4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT‑4 learns from extensive datasets, refining its responses based on accumulated ‘experiences’; moreover, pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviours over time to better resonate with their environment.

From Brain Waves to AI Frequencies

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed from nodes organised in layers that respond to inputs and then generate outputs. In human research on neural synchrony, the role of oscillations has proven a pivotal area of interest: high-frequency oscillatory activity facilitates communication between distant brain areas, and the theta-gamma neural code shows how our brains ‘package’ and ‘transmit’ information, reminiscent of a postal service meticulously wrapping parcels for efficient delivery. This neural packaging system orchestrates specific rhythms, akin to a coordinated dance, ensuring the streamlined transmission of information.

This perspective aligns with the concept of “neuromorphic computing”, in which AI architecture is modelled on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. Training large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, training a single AI model can emit as much carbon dioxide as five cars over their entire lifespan8. Moreover, researchers at the University of Massachusetts, Amherst, found that the carbon footprint of training deep learning models has been doubling approximately every 3.5 months, far outpacing improvements in computational efficiency9.

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption than conventional AI architectures10. For example, IBM’s TrueNorth neuromorphic chip has demonstrated energy efficiency several orders of magnitude better than that of traditional CPUs and GPUs11. Additionally, neuromorphic architectures are inherently suited to low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability.
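The contrast with clock-driven arithmetic can be illustrated with the basic unit of many neuromorphic designs, a leaky integrate-and-fire neuron: it emits an output event only when its membrane potential crosses a threshold, rather than producing a value on every tick. The parameters below are illustrative, not taken from TrueNorth or any specific chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. The membrane potential
# leaks toward rest while integrating input; a spike is emitted only on
# threshold crossing, after which the potential resets. Event-driven
# output is what lets neuromorphic hardware stay idle (and frugal)
# when nothing interesting is happening.
def lif_run(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)    # leak + integrate
        if v >= v_thresh:              # event: threshold crossed
            spikes.append(t)
            v = v_reset
    return spikes

# A steady input drives periodic spiking; zero input produces no events at all
print(lif_run([0.3] * 20))
print(lif_run([0.0] * 20))
```

With zero input the neuron emits nothing, which is precisely the energy argument: silence is free, whereas a conventional dense layer multiplies every weight on every forward pass regardless of the input.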

Implications for society

In the realm of training and skill development, synchronised AI has the potential to personalise learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. From a customer engagement standpoint, synchronised AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioural patterns.

For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimise processes, reduce waste, and strengthen the supply chain. This would increase profitability while making it ever easier to integrate sustainability considerations. In risk management, synchronised AI systems analysing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organisations to prepare or pivot before a crisis emerges and to limit the related social and societal impact. Likewise, synchronised AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.

In various domains beyond business, the deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. In healthcare particularly, synchronisation between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronising their movements with patients, thereby increasing trust and reducing pain. Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronised with those under their ‘care’ holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses and that patients preferred interacting with them.

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research has demonstrated that synchronised brain waves in high school classrooms were predictive of higher performance and happiness among students12. This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real time, education technology could potentially replicate the positive outcomes observed in synchronised classroom settings. Incorporating AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimising engagement and fostering positive learning outcomes.

Perspectives and Potential

The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are far from purely technical. Data privacy emerges as a critical apprehension, given the intimate nature of the neural information being processed by these systems. The ethical dimensions of such synchronisation, particularly in the realm of AI decision-making, present complex challenges that require careful consideration13,14.

Expanding on these concerns, two overarching issues demand heightened attention. First, the preservation of human autonomy stands as a foundational principle. As we enter the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial to upholding ethical standards.

Second, the question of equitable access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society, raising concerns about exacerbating existing inequalities15. A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, a lack of awareness about these technologies further compounds issues of equitable access16.

The integration of AI with human cognition marks the threshold of an unprecedented era, in which machines not only replicate human intelligence but also mirror intricate behavioural patterns and emotions. The potential synchronisation of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition. Whether harmonising humans and machines benefits humanity and the planet will depend on the human aspirations guiding the pursuit; done well, it opens opportunities for an advanced human-centred AI experience, in a “Fusion Mode”, as coined in the “Artificial Integrity” concept. This raises a timeless question, reverberating through the course of human history: what do we value, and why?

A crucial point to emphasise is that the implications of synchronising humans and machines extend far beyond the realm of AI experts; they concern every individual. This underscores the necessity of raising awareness and engaging the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that its ethical, societal, and existential dimensions are shaped by collective values and reflection, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

1Picard, R. W. (1995). “Affective Computing.” MIT Media Laboratory Perceptual Computing Section.
2Achterberg, J., Akarca, D., Strouse, D. J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5, 1369–1381. https://doi.org/10.1038/s42256-023-00748-9
3Lisman, J. E., & Idiart, M. A. (1995). Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Science, 267(5203), 1512–1515. https://doi.org/10.1126/science.7878473
4Bastos, A. M., Lundqvist, M., Waite, A. S., & Miller, E. K. (2020). Layer and rhythm specificity for predictive routing. Proceedings of the National Academy of Sciences, 117(49), 31459–31469. https://doi.org/10.1073/pnas.2014868117
5Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488
6O’Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2008). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 320(5879), 129–133.
7Ego-Stengel, V., & Wilson, M. A. (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus, 20(1), 1–10.
8Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1356
9Schwartz, R., Dodge, J., Smith, N. A., Overton, J., & Varshney, L. R. (2019). Green AI. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9342–9350. https://doi.org/10.1609/aaai.v33i01.33019342
10Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2014). The SpiNNaker Project. Proceedings of the IEEE, 102(5), 652–665. https://doi.org/10.1109/JPROC.2014.2304638
11Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., … Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. https://doi.org/10.1126/science.1254642
12Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., … & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375–1380.
13Dignum, V. (2018). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. AI & Society, 33(3), 475–476. https://doi.org/10.1007/s00146-018-0812-0
14Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
15Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844148
16Kostkova, P., Brewer, H., de Lusignan, S., Fottrell, E., Goldacre, B., Hart, G., Koczan, P., Knight, P., Marsolier, C., McKendry, R. A., Ross, E., Sasse, A., Sullivan, R., Chaytor, S., Stevenson, O., Velho, R., & Tooke, J. (2016). Who Owns the Data? Open Data for Healthcare. Frontiers in Public Health, 4. https://doi.org/10.3389/fpubh.2016.00107
