
EUROPEAN UNION AGENCY FOR CYBERSECURITY (ENISA)

CYBERSECURITY OF AI AND STANDARDISATION

MARCH 2023


ABBREVIATIONS

Abbreviation   Definition
AI             Artificial Intelligence
CEN-CENELEC    European Committee for Standardisation - European Committee for Electrotechnical Standardisation
CIA            Confidentiality, Integrity and Availability
EN             European Standard
ESO            European Standardisation Organisation
ETSI           European Telecommunications Standards Institute
GR             Group Report
ICT            Information and Communications Technology
ISG            Industry Specification Group
ISO            International Organization for Standardization
IT             Information Technology
JTC            Joint Technical Committee
ML             Machine Learning
NIST           National Institute of Standards and Technology
R&D            Research and Development
SAI            Security of Artificial Intelligence
SC             Subcommittee
SDO            Standards-Developing Organisation
TR             Technical Report
TS             Technical Specifications
WI             Work Item


ABOUT ENISA

The European Union Agency for Cybersecurity, ENISA, is the Union's agency dedicated to achieving a high common level of cybersecurity across Europe. Established in 2004 and strengthened by the EU Cybersecurity Act, the European Union Agency for Cybersecurity contributes to EU cyber policy, enhances the trustworthiness of ICT products, services and processes with cybersecurity certification schemes, cooperates with Member States and EU bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge sharing, capacity building and awareness raising, the Agency works together with its key stakeholders to strengthen trust in the connected economy, to boost resilience of the Union's infrastructure, and, ultimately, to keep Europe's society and citizens digitally secure. More information about ENISA and its work can be found here: www.enisa.europa.eu.

CONTACT

For contacting the authors please use team@enisa.europa.eu.

For media enquiries about this paper, please use press@enisa.europa.eu.

AUTHORS

P. Bezombes, S. Brunessaux, S. Cadzow

EDITOR(S)

ENISA: E. Magonara, S. Gorniak, P. Magnabosco, E. Tsekmezoglou

ACKNOWLEDGEMENTS

We would like to thank the Joint Research Centre and the European Commission for their active contribution and comments during the drafting stage. Also, we would like to thank the ENISA Ad Hoc Expert Group on Artificial Intelligence (AI) cybersecurity for the valuable feedback and comments in validating this report.


LEGAL NOTICE

This publication represents the views and interpretations of ENISA, unless stated otherwise. It does not endorse a regulatory obligation of ENISA or of ENISA bodies pursuant to the Regulation (EU) No 2019/881.

ENISA has the right to alter, update or remove the publication or any of its contents. It is intended for information purposes only and it must be accessible free of charge. All references to it or its use as a whole or partially must contain ENISA as its source.

Third-party sources are quoted as appropriate. ENISA is not responsible or liable for the content of the external sources including external websites referenced in this publication.

Neither ENISA nor any person acting on its behalf is responsible for the use that might be made of the information contained in this publication.

ENISA maintains its intellectual property rights in relation to this publication.

COPYRIGHT NOTICE

? European Union Agency for Cybersecurity (ENISA), 2023

This publication is licenced under CC-BY 4.0 “Unless otherwise noted, the reuse of this document is authorised under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence (/licenses/by/4.0/). This means that reuse is allowed, provided that appropriate credit is given and any changes are indicated”.

Cover image ?.

For any use or reproduction of photos or other material that is not under the ENISA copyright, permission must be sought directly from the copyright holders.

ISBN 978-92-9204-616-3, DOI 10.2824/277479, TP-03-23-011-EN-C


TABLE OF CONTENTS

1. INTRODUCTION
1.1 DOCUMENT PURPOSE AND OBJECTIVES
1.2 TARGET AUDIENCE AND PREREQUISITES
1.3 STRUCTURE OF THE STUDY
2. SCOPE OF THE REPORT: DEFINITION OF AI AND CYBERSECURITY OF AI
2.1 ARTIFICIAL INTELLIGENCE
2.2 CYBERSECURITY OF AI
3. STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI
3.1 RELEVANT ACTIVITIES BY THE MAIN STANDARDS-DEVELOPING ORGANISATIONS
3.1.1 CEN-CENELEC
3.1.2 ETSI
3.1.3 ISO-IEC
3.1.4 Others
4. ANALYSIS OF COVERAGE
4.1 STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI - NARROW SENSE
4.2 STANDARDISATION IN SUPPORT OF THE CYBERSECURITY OF AI - TRUSTWORTHINESS
4.3 CYBERSECURITY AND STANDARDISATION IN THE CONTEXT OF THE DRAFT AI ACT
5. CONCLUSIONS
5.1 WRAP-UP
5.2 RECOMMENDATIONS
5.2.1 Recommendations to all organisations
5.2.2 Recommendations to standards-developing organisations
5.2.3 Recommendations in preparation for the implementation of the draft AI Act
5.3 FINAL OBSERVATIONS
A ANNEX
A.1 SELECTION OF ISO 27000 SERIES STANDARDS RELEVANT TO THE CYBERSECURITY OF AI
A.2 RELEVANT ISO/IEC STANDARDS PUBLISHED OR PLANNED/UNDER DEVELOPMENT
A.3 CEN-CENELEC JOINT TECHNICAL COMMITTEE 21 AND DRAFT AI ACT REQUIREMENTS
A.4 ETSI ACTIVITIES AND DRAFT AI ACT REQUIREMENTS


EXECUTIVE SUMMARY

The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation. It does so by considering the specificities of AI, and in particular machine learning, and by adopting a broad view of cybersecurity, encompassing both the ‘traditional’ confidentiality–integrity–availability paradigm and the broader concept of AI trustworthiness. Finally, the report examines how standardisation can support the implementation of the cybersecurity aspects embedded in the proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final) (draft AI Act).

The report describes the standardisation landscape covering AI, by depicting the activities of the main standards-developing organisations (SDOs) that seem to be guided by concern about insufficient knowledge of the application of existing techniques to counter threats and vulnerabilities arising from AI. This results in the ongoing development of ad hoc reports and guidance, and of ad hoc standards.

The report argues that existing general-purpose technical and organisational standards (such as ISO-IEC 27001 and ISO-IEC 9001) can contribute to mitigating some of the risks faced by AI with the help of specific guidance on how they can be applied in an AI context. This consideration stems from the fact that, in essence, AI is software and therefore software security measures can be transposed to the AI domain.

The report also specifies that this approach is not exhaustive and that it has some limitations. For example, while the report focuses on software aspects, the notion of AI can include both technical and organisational elements beyond software, such as hardware or infrastructure. Other examples include the fact that determining appropriate security measures relies on a system-specific analysis, and the fact that some aspects of cybersecurity are still the subject of research and development, and therefore might not be mature enough to be exhaustively standardised. In addition, existing standards seem not to address specific aspects such as the traceability and lineage of both data and AI components, or metrics on, for example, robustness.

The report also looks beyond the mere protection of assets, as cybersecurity can be considered as instrumental to the correct implementation of trustworthiness features of AI and, conversely, the correct implementation of trustworthiness features is key to ensuring cybersecurity. In this context, it is noted that there is a risk that trustworthiness is handled separately within AI-specific and cybersecurity-specific standardisation initiatives. One example of an area where this might happen is conformity assessment.

Last but not least, the report complements the observations above by extending the analysis to the draft AI Act. Firstly, the report stresses the importance of the inclusion of cybersecurity aspects in the risk assessment of high-risk systems in order to determine the cybersecurity risks that are specific to the intended use of each system. Secondly, the report highlights the lack of standards covering the competences and tools of the actors performing conformity assessments. Thirdly, it notes that the governance systems drawn up by the draft AI Act and the Cybersecurity Act (CSA)? should work in harmony to avoid duplication of efforts at national level.

Finally, the report concludes that some standardisation gaps might become apparent only as the AI technologies advance and with further study of how standardisation can support cybersecurity.

? Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (https://eur-lex.europa.eu/eli/reg/2019/881/oj).


1. INTRODUCTION

1.1 DOCUMENT PURPOSE AND OBJECTIVES

The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation. The report is intended to contribute to the activities preparatory to the implementation of the proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final) (the draft AI Act) on aspects relevant to cybersecurity.

1.2 TARGET AUDIENCE AND PREREQUISITES

The target audience of this report includes a number of different stakeholders that are concerned by the cybersecurity of AI and standardisation.

The primary addressees of this report are standards-developing organisations (SDOs) and public sector/government bodies dealing with the regulation of AI technologies.

The ambition of the report is to be a useful tool that can inform a broader set of stakeholders of the role of standards in helping to address cybersecurity issues, in particular:

? academia and the research community;
? the AI technical community, AI cybersecurity experts and AI experts (designers, developers, machine learning (ML) experts, data scientists, etc.) with an interest in developing secure solutions and in integrating security and privacy by design in their solutions;
? businesses (including small and medium-sized enterprises) that make use of AI solutions and/or are engaged in cybersecurity, including operators of essential services.

The reader is expected to have a degree of familiarity with software development and with the confidentiality, integrity and availability (CIA) security model, and with the techniques of both vulnerability analysis and risk analysis.

1.3 STRUCTURE OF THE STUDY

The report is structured as follows:

? definition of the perimeter of the analysis (Chapter 2): introduction to the concepts of AI and cybersecurity of AI;
? inventory of standardisation activities relevant to the cybersecurity of AI (Chapter 3): overview of standardisation activities (both AI-specific and non-AI-specific) supporting the cybersecurity of AI;
? analysis of coverage (Chapter 4): analysis of the coverage of the most relevant standards identified in Chapter 3 with respect to the CIA security model and to trustworthiness characteristics supporting cybersecurity;
? wrap-up and conclusions (Chapter 5): building on the previous sections, recommendations on actions to ensure standardisation support to the cybersecurity of AI, and on preparation for the implementation of the draft AI Act.


2. SCOPE OF THE REPORT: DEFINITION OF AI AND CYBERSECURITY OF AI

2.1 ARTIFICIAL INTELLIGENCE

Understanding AI and its scope seems to be the very first step towards defining the cybersecurity of AI. Still, a clear definition and scope of AI have proven to be elusive. The concept of AI is evolving and the debate over what it is, and what it is not, is still largely unresolved, partly due to the influence of marketing behind the term ‘AI’. Even at the scientific level, the exact scope of AI remains very controversial. In this context, numerous forums have adopted/proposed definitions of AI.?

Box 1: Example – Definition of AI, as included in the draft AI Act

In its draft version, the AI Act proposes a definition in Article 3(1):

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The techniques and approaches referred to in Annex I are:

? machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
? logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
? statistical approaches, Bayesian estimation, search and optimisation methods.

In line with previous ENISA work, which considers it the driving force in terms of AI technologies, the report mainly focuses on ML. This choice is further supported by the fact that there seems to be a general consensus that ML techniques are predominant in current AI applications. Last but not least, it is considered that the specificities of ML result in vulnerabilities that affect the cybersecurity of AI in a distinctive manner. It is to be noted that the report considers AI from a lifecycle perspective?. Considerations concerning ML only have been flagged.

? For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) in the ‘First draft of the recommendation on the ethics of artificial intelligence’, and the European Commission’s High-Level Expert Group on Artificial Intelligence.

? See the lifecycle approach portrayed in the ENISA report Securing Machine Learning Algorithms (https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms).


Box 2: Specificities of machine learning – examples from a supervised learning model?

ML systems cannot achieve 100 % in both precision and recall. Depending on the situation, ML needs to trade off precision for recall and vice versa. This means that AI systems will, once in a while, make wrong predictions (see the illustrative sketch after this box). This is all the more important because it is still difficult to understand when the AI system will fail, but it will eventually.

This is one of the reasons for the need for explainability of AI systems. In essence, algorithms are deemed to be explainable if the decisions they make can be understood by a human (e.g. a developer or an auditor) and then explained to an end user (ENISA, Securing Machine Learning Algorithms).

A major specific characteristic of ML is that it relies on the use of large amounts of data to develop ML models. Manually controlling the quality of the data can then become impossible. Specific traceability or data quality procedures need to be put in place to ensure that, to the greatest extent possible, the data being used do not contain biases (e.g. forgetting to include faces of people with specific traits), have not been deliberately poisoned (e.g. adding data to modify the outcome of the model) and have not been deliberately or unintentionally mislabelled (e.g. a picture of a dog labelled as a wolf).
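
To make the precision-recall trade-off above concrete, the short Python sketch below (not part of the ENISA report; the classifier labels, scores and thresholds are made-up illustrative values) computes precision, recall and the F-measure mentioned in footnote 4 for a hypothetical binary classifier at several decision thresholds.

# Minimal sketch (illustrative only, not from the report): computing precision,
# recall and the F-measure for a binary classifier at different decision thresholds,
# showing how raising the threshold trades recall for precision.

def confusion_counts(labels, scores, threshold):
    """Count true/false positives and negatives at a given decision threshold."""
    tp = fp = fn = tn = 0
    for label, score in zip(labels, scores):
        predicted_positive = score >= threshold
        if predicted_positive and label == 1:
            tp += 1
        elif predicted_positive and label == 0:
            fp += 1
        elif not predicted_positive and label == 1:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def precision_recall_f1(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN), F-measure = harmonic mean."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical ground truth (1 = positive class) and model scores in [0, 1].
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
scores = [0.95, 0.80, 0.55, 0.40, 0.70, 0.45, 0.30, 0.20, 0.85, 0.60]

for threshold in (0.3, 0.5, 0.7):
    tp, fp, fn, tn = confusion_counts(labels, scores, threshold)
    p, r, f1 = precision_recall_f1(tp, fp, fn)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}  F1={f1:.2f}")

On these toy values the sketch prints precision rising from roughly 0.56 at threshold 0.3 to 0.75 at threshold 0.7, while recall falls from 1.00 to 0.60, which is exactly the trade-off described in the box.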

2.2 CYBERSECURITY OF AI

AI and cybersecurity have been widely addressed by the literature both separately and in combination. The ENISA report Securing Machine Learning Algorithms? describes the multidimensional relationship between AI and cybersecurity, and identifies three dimensions:

? cybersecurity of AI: lack of robustness and the vulnerabilities of AI models and algorithms;
? AI to support cybersecurity: AI used as a tool/means to create advanced cybersecurity (e.g. by developing more effective security controls) and to facilitate the efforts of law enforcement and other public authorities to better respond to cybercrime;
? malicious use of AI: malicious/adversarial use of AI to create more sophisticated types of attacks.

The current report focuses on the first of these dimensions, namely the cybersecurity of AI. Still, there are different interpretations of the cybersecurity of AI that could be envisaged:

? a narrow and traditional scope, intended as protection against attacks on the confidentiality, integrity and availability of assets (AI components, and associated data and processes) across the lifecycle of an AI system;
? a broad and extended scope, supporting and complementing the narrow scope with trustworthiness features such as data quality, oversight, robustness, accuracy, explainability, transparency and traceability.

The report adopts a narrow interpretation of cybersecurity, but it also includes considerations about the cybersecurity of AI from a broader and extended perspective. The reason is that links between cybersecurity and trustworthiness are complex and cannot be ignored: the requirements of trustworthiness complement and sometimes overlap with those of AI cybersecurity in ensuring proper functioning. As an example, oversight is necessary not only for the general monitoring of an AI system in a complex environment, but also to detect abnormal behaviours due to cyberattacks. In the same way, a data quality process (including data traceability) is an added value alongside pure data protection from cyberattack. Hence, trustworthiness features such as robustness, oversight, accuracy, traceability, explainability and transparency inherently support and complement cybersecurity.

? Besides the ones mentioned in the box, the ‘False Negative Rate’, the ‘False Positive Rate’ and the ‘F-measure’ are examples of other relevant metrics.

? https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms


3. STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI

3.1 RELEVANT ACTIVITIES BY THE MAIN STANDARDS-DEVELOPING ORGANISATIONS

It is recognised that many SDOs are looking at AI and preparing guides and standardisation deliverables to address AI. The rationale for much of this work is that whenever something new (in this instance AI) is developed there is a broad requirement to identify if existing provisions apply to the new domain and how. Such studies may help to understand the nature of the new and to determine if the new is sufficiently divergent from what has gone before to justify, or require, the development and application of new techniques. They could also give detailed guidance on the application of existing techniques to the new, or define additional techniques to fill the gaps.

Still, in the scope of this report, the focus is mainly on standards that can be harmonised. This limits the scope of analysis to those of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), the European Committee for Standardization (CEN) and European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI). CEN and CENELEC may transpose standards from ISO and IEC, respectively, to EU standards under the auspices of, respectively, the Vienna and Frankfurt agreements.

3.1.1 CEN-CENELEC

CEN-CENELEC addresses AI and cybersecurity mainly within two joint technical committees (JTCs).

? JTC 13 ‘Cybersecurity and data protection’ has as its primary objective to transpose relevant international standards (especially from ISO/IEC JTC 1 subcommittee (SC) 27) as European standards (ENs) in the information technology (IT) domain. It also develops ‘homegrown’ ENs, where gaps exist, in support of EU directives and regulations.
? JTC 21 ‘Artificial intelligence’ is responsible for the development and adoption of standards for AI and related data (especially from ISO/IEC JTC 1 SC 42), and for providing guidance to other technical committees concerned with AI.

JTC 13 addresses what is described as the narrow scope of cybersecurity (see Section 2.2). The committee has identified a list of standards from ISO-IEC that are of interest for AI cybersecurity and might be adopted/adapted by CEN-CENELEC based on their technical cooperation agreement. The most prominent identified standards belong to the ISO 27000 series on information security management systems, which may be complemented by the ISO 15408 series for the development, evaluation and/or procurement of IT products with security functionality, as well as sector-specific guidance, e.g. ISO/IEC 27019:2017 Information technology – Security techniques – Information security controls for the energy utility industry (see Annex A.1 for the full list of relevant ISO 27000 series standards that have been identified by CEN-CENELEC).


In addition, the following guidance and use case documents are drafts under development (some at a very early stage) and explore AI more specifically. It is premature to evaluate the impacts of these standards.

? ISO/IEC AWI 27090, Cybersecurity – Artificial intelligence – Guidance for addressing security threats and failures in artificial intelligence systems: the document aims to provide information to organisations to help them better understand the consequences of security threats to AI systems, throughout their lifecycles, and describes how to detect and mitigate such threats. The document is at the preparatory stage.
? ISO/IEC CD TR 27563, Cybersecurity – Artificial intelligence – Impact of security and privacy in artificial intelligence use cases: the document is at the committee stage.

By design, JTC 21 is addressing the extended scope of cybersecurity (see Section 4.2), which includes trustworthiness characteristics, data quality, AI governance, AI management systems, etc. Given this, a first list of ISO-IEC/SC 42 standards has been identified as having direct applicability to the draft AI Act and is being considered for adoption/adaption by JTC 21:

? ISO/IEC 22989:2022, Artificial intelligence concepts and terminology (published);
? ISO/IEC 23053:2022, Framework for artificial intelligence (AI) systems using machine learning (ML) (published);
? ISO/IEC DIS 42001, AI management system (under development);
? ISO/IEC 23894, Guidance on AI risk management (publication pending);
? ISO/IEC TS 4213, Assessment of machine learning classification performance (published);
? ISO/IEC FDIS 24029-2, Methodology for the use of formal methods (under development);
? ISO/IEC CD 5259 series, Data quality for analytics and ML (under development).

In addition, JTC 21 has identified two gaps and has accordingly launched two ad hoc groups with the ambition of preparing new work item proposals (NWIPs) supporting the draft AI Act. The potential future standards are:

? AI systems risk catalogue and risk management;
? AI trustworthiness characterisation (e.g. robustness, accuracy, safety, explainability, transparency and traceability).

Finally, it has been determined that ISO-IEC 42001 on AI management systems and ISO-IEC 27001 on cybersecurity management systems may be complemented by ISO 9001 on quality management systems in order to have proper coverage of AI and data quality management.

3.1.2 ETSI

ETSI has set up a dedicated Operational Co-ordination Group on Artificial Intelligence, which coordinates the standardisation activities related to AI that are handled in the technical bodies, committees and industry specification groups (ISGs) of ETSI. In addition, ETSI has a specific group on the security of AI (SAI) that has been active since 2019 in developing reports that give a more detailed understanding of the problems that A
