Privacy Attacks & Defenses
姜育剛, 馬興軍, 吳祖煊
(Cover image: Salvador Dalí, "The Persistence of Memory," 1931)

Recap: Week 8
- Data Extraction Attack & Defense
- Model Stealing Attack
- Future Research

This Week
- Membership Inference Attack
- Differential Privacy

Membership Inference Attack

Membership Inference Attack
Infer whether a given input sample is present in the target model's training dataset (is it a member?).
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Privacy and Ethical Problems
MIA could cause the following harms:
- Leak private info: e.g., that someone has visited a certain place or has an unspeakable illness
- Expose info about the training data
MIA sensitivity also indicates data leakage risk.
An Early Work
Homer, Nils, et al. "Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays." PLoS Genetics 4.8 (2008): e1000167.
Determine whether an individual's genome appears in a complex DNA mixture; useful for forensic investigation.
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Black-box attack pipeline; needs the probability vector.
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Train k shadow models on disjoint datasets (see the sketch below):
- Sample a number of subsets from D
- Train a model on each of the subsets
- Take one model as the target
- Take the rest of the models as shadow models
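The shadow-model setup above can be summarized in a few lines. A minimal sketch, assuming the attacker holds a labeled dataset (D_attack_x, D_attack_y) and a generic train_model routine; both names are hypothetical placeholders, not from the slides:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def build_shadow_models(D_attack_x, D_attack_y, k, train_model):
    """Train k shadow models on disjoint subsets of the attacker's data.
    `train_model(x, y)` is a hypothetical helper returning a fitted
    classifier that exposes predict_proba(); any model family works."""
    idx_chunks = np.array_split(np.random.permutation(len(D_attack_x)), k)
    shadows = []
    for chunk in idx_chunks:
        # Each shadow model gets its own member / non-member split,
        # mimicking how the target model was trained.
        x_in, x_out, y_in, y_out = train_test_split(
            D_attack_x[chunk], D_attack_y[chunk], test_size=0.5)
        model = train_model(x_in, y_in)
        shadows.append((model, (x_in, y_in), (x_out, y_out)))
    return shadows
```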
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Different ways to get the shadow training data: data synthesis (random synthesis), as sketched below:
- Phase 1: search for high-confidence data points in the data space
- Phase 2: sample synthetic data from these points
- Repeat the above for each class c
Phase 1: at each step, change only k features of the high-confidence sample found so far.
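A minimal sketch of the Phase 1 hill-climbing search described above, assuming black-box access to the target model through a hypothetical target_predict_proba(x) callable that returns the probability vector:

```python
import numpy as np

def synthesize_record(target_predict_proba, c, n_features,
                      conf_min=0.8, k=4, max_iter=1000, rng=None):
    """Phase 1: hill-climb toward a record that the target model assigns
    to class c with high confidence. Features are assumed normalized to
    [0, 1]. Phase 2 (not shown) samples new records around the
    high-confidence points returned here."""
    rng = rng or np.random.default_rng()
    x = rng.random(n_features)                       # random starting point
    best_conf = target_predict_proba(x)[c]
    for _ in range(max_iter):
        x_new = x.copy()
        feats = rng.choice(n_features, size=k, replace=False)
        x_new[feats] = rng.random(k)                 # perturb only k features
        conf = target_predict_proba(x_new)[c]
        if conf >= best_conf:                        # keep improvements
            x, best_conf = x_new, conf
        if best_conf >= conf_min:                    # confident enough
            return x
    return None                                      # give up for this class
```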
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Statistics-based synthesis:
- Prior knowledge: the marginal distribution w.r.t. each class
- Phase 1: sample according to these statistics
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
We could also assume the attacker can access noisy real data: real but noisy.
- Very similar to the real dataset
- But with a few features (10% or 20%) randomly reset
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
Finally: training the inference model (see the sketch below).
- "in": in the training set
- "out": in the test set
Train the inference model with the dataset: (prob1, "in"), (prob2, "in"), (prob3, "out"), (prob4, "out").
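Putting the pieces together: the attack dataset pairs each probability vector with an "in"/"out" label, and a binary attack classifier is fit on it. A minimal sketch reusing the hypothetical shadows structure from the earlier sketch (Shokri et al. actually train one attack model per class; a single model is used here for brevity):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_attack_model(shadows):
    """Build (probability-vector, in/out) pairs from the shadow models
    and fit a binary attack classifier."""
    probs, labels = [], []
    for model, (x_in, _), (x_out, _) in shadows:
        p_in = model.predict_proba(x_in)      # members of this shadow model
        p_out = model.predict_proba(x_out)    # non-members
        probs.extend([p_in, p_out])
        labels.extend([np.ones(len(p_in)), np.zeros(len(p_out))])  # 1 = "in", 0 = "out"
    X, y = np.vstack(probs), np.concatenate(labels)
    return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

# Membership score for a candidate x against a (hypothetical) target_model:
#   attack.predict_proba(target_model.predict_proba(x.reshape(1, -1)))[0, 1]
```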
MIA: The Most Well-known Work
Shokri, Reza, et al. "Membership inference attacks against machine learning models." S&P, 2017.
How well can MIA perform?
Datasets: CIFAR-10, CIFAR-100, Purchases, Locations, Texas hospital stays, MNIST, UCI Adult (Census Income).

White-box MIA
Nasr et al. "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning." S&P, 2019.
Hu, Hongsheng, et al. "Membership inference attacks on machine learning: A survey." ACM Computing Surveys (CSUR) 54.11s (2022): 1-37.
White-box vs. black-box.
White-box MIA
Nasr et al. "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning." S&P, 2019.
Extracted features: output probabilities, intermediate-layer activations, gradients, and (in the unsupervised setting) the reconstruction loss; these are combined to produce the membership inference result.
Limitations of MIA
- Constructing shadow models
- Assuming access to some data or prior knowledge
- Overfitting is a must
- Limited to classification models
- Limited to small models
Addressing Limitations of MIA
Salem et al. "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models." NDSS, 2019.
Model- and data-independent MIA.

Addressing Limitations of MIA
Long, Yunhui, et al. "A pragmatic approach to membership inferences on machine learning models." EuroS&P, 2020.
Attacking non-overfitting DNNs; focusing on minimizing false positives.
Target question: which model's training data does sample A/B belong to?
Addressing Limitations of MIA
Leino & Fredrikson. "Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference." USENIX Security, 2020.
More practical white-box threat model: the adversary only knows the model but not the data distribution.
Exploit idiosyncratic, instance-specific memorization for membership inference.
(Figure: training images and internal explanations, e.g., the pink-background explanation of Tony Blair.)
Addressing Limitations of MIA
Hayes, Jamie, et al. "LOGAN: Membership inference attacks against generative models." arXiv preprint arXiv:1705.07663 (2017).
Extension to generative models: fully exploit the discriminator's discriminative ability; samples receiving high discriminator confidence most likely come from the original training dataset.
Metric-guided MIA
Yeom, Samuel, et al. "Privacy risk in machine learning: Analyzing the connection to overfitting." CSF, 2018.
Salem et al. "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models." NDSS, 2019.
Metric-based, anomaly-detection-style rules (see the sketch below):
- Prediction correctness: correctly predicted samples are members
- Prediction loss: samples with loss below the average training loss are members
- Prediction confidence: samples with a maximum probability close to 1 are members
- Prediction entropy: samples with low prediction entropy are members
- Modified prediction entropy: like entropy, but treats the true class and the other classes differently
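A minimal sketch of the metric-based scores listed above, assuming p is the target model's probability vector for a candidate sample and y is its label; the decision thresholds are left to the attacker to calibrate:

```python
import numpy as np

def metric_membership_scores(p, y):
    """Per-sample statistics used by metric-based MIAs.
    Higher confidence / lower loss / lower entropy suggests "member"."""
    p = np.asarray(p, dtype=float)
    eps = 1e-12
    correct = int(np.argmax(p) == y)               # prediction correctness
    loss = -np.log(p[y] + eps)                     # cross-entropy loss w.r.t. the true label
    confidence = float(np.max(p))                  # maximum predicted probability
    entropy = float(-np.sum(p * np.log(p + eps)))  # prediction entropy
    # One common formulation of the modified entropy: the true class and
    # the other classes enter the score differently.
    others = np.delete(p, y)
    mentr = float(-(1 - p[y]) * np.log(p[y] + eps)
                  - np.sum(others * np.log(1 - others + eps)))
    return {"correct": correct, "loss": loss, "confidence": confidence,
            "entropy": entropy, "modified_entropy": mentr}

# Example rule: predict "member" if loss falls below a threshold, e.g.,
# the average training loss estimated from shadow models.
```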
A Summary of Existing MIAs
Used datasets:
- Image: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, YaleFace, ChestX-ray8, SVHN, CelebA, ImageNet
- Tabular: Adult, Foursquare, Purchase-100, Texas100, Location, etc.
- Audio: LibriSpeech, TIMIT, TED
- Text: Weibo, TweetEmoInt, SATED, Dislogs, Reddit comments, Cora, Pubmed, Citeseer
Hu, Hongsheng, et al. "Membership inference attacks on machine learning: A survey." ACM Computing Surveys, 2022.
A Summary of Existing MIAs
Target models:
- On images: multi-layer CNN + 1 or 2 FC layers (more than 5 papers used 2-4-layer CNNs); AlexNet, ResNet18, ResNet50, VGG16, VGG19, DenseNet121, EfficientNetV2, EfficientNetB0
- GANs: InfoGAN, PGGAN, WGAN-GP, DCGAN, MEDGAN, and VAEGAN
- On tabular data: FC-only models
- On text: multi-layer CNN, multi-layer RNN/LSTM, transformers (e.g., BERT, GPT-2)
- On audio: hybrid systems (HMM-DNN models); end-to-end (multi-layer LSTM/RNN/GRU)
- MLaaS (online): Google Prediction API, Amazon ML
Membership Inference Attack
Differential Privacy

Differential Privacy

Finite Difference and Derivative
h tends to be small (approaching zero); see the formula below.
The gradient of a function at a point can be estimated from how its value changes under a tiny perturbation at that point.
What if we apply a tiny perturbation to the dataset instead?
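The slide's formula is not recoverable from the extracted text; the standard finite-difference definition it alludes to is:

```latex
f'(x) \approx \frac{f(x+h) - f(x)}{h}, \qquad h \to 0
```

Differential privacy asks the analogous question with the learning algorithm in place of f and the dataset in place of x.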
Finite Difference -> Differential Privacy
How much does the algorithm's output change when the dataset is changed slightly?
(Diagram: the algorithm as the function, the dataset as its input.)

Differential Privacy
How much does the algorithm's output change when the dataset is changed slightly?
Dwork, Cynthia. "Differential privacy: A survey of results." ICTAMC, Heidelberg, 2008.
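The formal definition on the original slide is an image; the standard statement of Dwork's ε-differential privacy, and its (ε, δ) relaxation, for a randomized mechanism M and neighboring datasets D, D' differing in one record is:

```latex
\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S] \quad \text{for all } S,
\qquad\text{and, relaxed,}\qquad
\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
```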
Properties of DP
McSherry, Frank D. "Privacy integrated queries: an extensible platform for privacy-preserving data analysis." ACM SIGMOD, 2009.
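The listed properties are not preserved in the extracted text; two standard DP properties usually stated here, and which underlie composition-based systems such as McSherry's PINQ, are post-processing and sequential composition:

```latex
\text{Post-processing: if } M \text{ is } \varepsilon\text{-DP, then } g \circ M \text{ is } \varepsilon\text{-DP for any } g. \\
\text{Sequential composition: if } M_1 \text{ is } \varepsilon_1\text{-DP and } M_2 \text{ is } \varepsilon_2\text{-DP, then } (M_1, M_2) \text{ is } (\varepsilon_1 + \varepsilon_2)\text{-DP.}
```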
How to Obtain a Differentially Private Model?
Think: how can you keep your own contribution (your "voice") from being detected?
Measuring Sensitivity
Nissim, Raskhodnikova, and Smith. "Smooth sensitivity and sampling in private data analysis." STOC, 2007.
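The slide's formula is an image; the standard global (ℓ1) sensitivity it refers to, for a query f over neighboring datasets D, D', is:

```latex
\Delta f = \max_{D, D' \text{ differing in one record}} \lVert f(D) - f(D') \rVert_1
```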
Noise Models
Common noise-addition mechanisms:
- Laplace mechanism
- Gaussian mechanism
- Exponential mechanism: discrete -> probabilistic; deterministic -> non-deterministic
The Laplace Mechanism
(Source: SaTML 2023 - Gautam Kamath - An Introduction to Differential Privacy)
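The mechanism is shown only as slide images here; a minimal sketch for a counting query (sensitivity 1), with illustrative function names:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release f(D) + Lap(sensitivity / epsilon), which is epsilon-DP."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query ("how many records satisfy a predicate?")
# has sensitivity 1, so Lap(1/epsilon) noise suffices for epsilon-DP.
records = np.random.default_rng(0).random(1000)
true_count = int(np.sum(records > 0.5))
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Larger ε means less noise and weaker privacy.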
Laplace vs. Gaussian
(Source: SaTML 2023 - Gautam Kamath - An Introduction to Differential Privacy)
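The comparison figures are not recoverable from the text; the standard calibrations being compared are as follows (the Gaussian bound is the classical one for ε in (0, 1)):

```latex
\text{Laplace: } f(D) + \mathrm{Lap}\!\left(\frac{\Delta_1 f}{\varepsilon}\right) \text{ is } \varepsilon\text{-DP};
\qquad
\text{Gaussian: } f(D) + \mathcal{N}(0, \sigma^2),\;
\sigma \ge \frac{\Delta_2 f \sqrt{2 \ln(1.25/\delta)}}{\varepsilon}, \text{ is } (\varepsilon, \delta)\text{-DP}.
```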
DP + Deep Learning
Question: where should the noise be added?
- Input space
- Model space
- Output space

Input-Space DP
Differentially private preprocessing of the training data: the dp-GAN pipeline.
Zhang et al. "Differentially private releasing via deep generative model (technical report)." arXiv:1801.01594 (2018).
Input-Space DP: Randomized Smoothing
Randomized smoothing: a certifiable adversarial defense (a minimal prediction sketch follows below).
Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." ICML, 2019.
Fill the input space with random noise to obtain a certified adversarial robustness bound.
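A minimal sketch of the smoothed classifier's prediction step (majority vote over Gaussian-perturbed copies of the input); the certified-radius computation from Cohen et al. is omitted, and base_classifier is a hypothetical callable returning a hard label:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Prediction of the smoothed classifier g(x): the label the base
    classifier returns most often on Gaussian-perturbed copies of x."""
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n_samples):
        noisy_x = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy_x)      # hard label from the base model
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)          # majority vote
```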
Model-Space DP
Abadi, Martin, et al. "Deep learning with differential privacy." CCS, 2016.
Apply differential privacy to the model parameters: the DP-SGD algorithm (sketched below).

DP-SGD Performance
(Source: SaTML 2023 - Gautam Kamath - An Introduction to Differential Privacy)
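A minimal sketch of one DP-SGD update (per-example gradient clipping followed by calibrated Gaussian noise), written in plain NumPy rather than with a DP library; per_example_grads is a hypothetical list of per-example gradient vectors:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, and add Gaussian noise with std noise_multiplier * clip_norm
    divided by the lot size."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]                 # per-example clipping
    grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=grad.shape)                    # calibrated Gaussian noise
    return params - lr * (grad + noise)
```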
More Practical Solution?
1: Training on public data