Category: August 2024
Posted: 2024.04.09
Last modified: 2024.04.09
Author: 신진명

Task-Specific Differential Private Data Publish Method for Privacy-Preserving Deep Learning

With recent advances in deep neural networks, they are widely applied in fields such as advertising, finance, and medicine to provide personalized services. To develop deep neural network models for such services, many institutions collect large datasets containing sensitive information and use them to train models. However, the data memorization effect of deep neural networks, a phenomenon in which a model remembers information that is not necessary for its specific task, leads many malicious users to target the sensitive information memorized in deep neural networks. To handle the information leakage caused by this phenomenon, two representative mechanisms are widely studied: homomorphic encryption and differential privacy. This dissertation shows the limitations of homomorphic encryption-based privacy-preserving mechanisms and proposes two new differential privacy-based privacy-preserving methods, one in each chapter, as follows:


1. Adaptive Differential Privacy Method for Structured Data: For structured data, anonymization techniques are widely used because of their intuitive characteristics and low additional computational cost. However, many studies have shown that deep neural network models built on anonymized data remain vulnerable to various privacy attacks targeting sensitive information. Unlike anonymization techniques, the security of differential privacy is mathematically proven (the standard definition is recalled after this list), and the performance of deep neural networks trained with differential privacy does not degrade severely. However, since the performance degradation of a differential privacy-based deep neural network cannot be bounded mathematically, differential privacy can cause exceptionally severe performance degradation on specific tasks depending on the parameter settings. To handle this problem, this chapter proposes an adaptive differential privacy method for structured data. The main idea of the proposed method is to calibrate the amount and distribution of the random noise used in differential privacy according to the feature importance for the specific task. To automate this task-specific, feature importance-based noise calibration, explainable artificial intelligence is used to extract feature importance scores, which are then converted into noise magnitudes (a minimal illustrative sketch follows the list). Experiments demonstrate the feasibility of the proposed method through a data utility comparison, resistance against privacy attacks, and performance variation according to the privacy parameter.


2. Differential Private Image De-Identification Method for Deep Learning-based Service: Because it places no restrictions on the input data type, a simple differential privacy-based privacy-preserving deep learning method called differentially private stochastic gradient descent is widely adopted. However, recent research on privacy attacks targeting differentially private stochastic gradient descent-based deep learning models has shown that this method can be exploited easily. Unlike such model modification-based approaches, data modification-based privacy preservation, which adds noise to the data directly, is relatively secure against privacy attacks on deep learning models. At the same time, many researchers have endeavored to modify input data using differential privacy mechanisms for structured data, but only a few studies on unstructured data, e.g., images, have proposed differential privacy-based input data modification methods for specific tasks. To handle this limitation, this chapter proposes a differentially private image de-identification method. The key idea of the proposed method is to add the features that are important to the deep learning model back into a noised, unrecognizable image (a minimal illustrative sketch follows the list). Thus, a human cannot recognize the content of the image, but the deep neural network can still recognize and analyze the content of the noised image. To automate the extraction of the important features, the feature importance produced by explainable artificial intelligence is applied. Additionally, the service architecture and a simple protocol for service time are described.
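For reference, both chapters build on the standard notion of differential privacy; the textbook definition and the Laplace mechanism, the usual way of calibrating noise to a sensitivity and a privacy budget, are recalled below. These formulas are standard background and are not quoted from the dissertation itself.

```latex
% A randomized mechanism M is (epsilon, delta)-differentially private if, for all
% neighboring datasets D, D' and every measurable output set S:
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta .
\]
% The Laplace mechanism achieves (epsilon, 0)-DP for a query f with L1-sensitivity
% Delta f by adding noise of scale Delta f / epsilon:
\[
  \mathcal{M}(D) = f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\varepsilon}\right),
  \qquad
  \Delta f = \max_{D \sim D'} \lVert f(D) - f(D') \rVert_{1} .
\]
```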

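As a minimal sketch of the first method, assuming NumPy, known per-feature L1 sensitivities, and importance scores produced by an XAI method such as SHAP (all illustrative assumptions rather than the dissertation's actual pipeline), the snippet below splits a total privacy budget across features in proportion to importance and adds correspondingly scaled Laplace noise:

```python
import numpy as np

def importance_calibrated_laplace(X, importances, total_epsilon, sensitivities):
    """Per-feature Laplace noise whose scale is calibrated by feature importance.

    Important features receive a larger share of the privacy budget (hence less
    noise); unimportant features receive more noise. Illustrative sketch only.
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()                        # normalize importances into a budget split
    eps = total_epsilon * w                # per-feature budget (sequential composition)
    noisy = X.copy()
    for j in range(X.shape[1]):
        scale = sensitivities[j] / eps[j]  # Laplace scale Delta_j / epsilon_j
        noisy[:, j] += np.random.laplace(0.0, scale, size=X.shape[0])
    return noisy

# Hypothetical usage: importances could come from an XAI method such as SHAP.
X = np.random.rand(100, 4)                       # toy structured dataset
importances = np.array([0.5, 0.3, 0.15, 0.05])   # task-specific feature importance
sensitivities = np.ones(4)                       # assumed per-feature L1 sensitivity
X_private = importance_calibrated_laplace(X, importances, 1.0, sensitivities)
```

Giving important features a larger share of the budget means they receive less noise, which is how task-specific utility can be preserved while the total budget stays fixed under sequential composition.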

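For the second method, a similarly hedged sketch: the whole image is first perturbed with pixel-wise Laplace noise so that a human cannot recognize it, and regions that an importance map marks as relevant to the model (for example, a Grad-CAM-style map; the source of the map is an assumption here) are partially blended back. The blend step and its effect on the formal privacy guarantee are simplified and do not reproduce the dissertation's exact mechanism or accounting.

```python
import numpy as np

def deidentify_image(image, importance_map, epsilon, blend=0.5):
    """Importance-guided image de-identification (illustrative sketch).

    The image is perturbed with pixel-wise Laplace noise until a human cannot
    recognize it; regions the model relies on (per an importance map, e.g. a
    hypothetical Grad-CAM output) are partially blended back so the network
    can still analyze them.
    """
    img = np.asarray(image, dtype=float)             # pixel values assumed in [0, 1]
    sensitivity = 1.0                                # per-pixel value range
    noised = img + np.random.laplace(0.0, sensitivity / epsilon, size=img.shape)
    noised = np.clip(noised, 0.0, 1.0)

    m = np.asarray(importance_map, dtype=float)
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)  # normalize importance to [0, 1]
    alpha = blend * m                                # how much original signal to restore
    return alpha * img + (1.0 - alpha) * noised

# Hypothetical usage with a random grayscale image and importance map.
image = np.random.rand(32, 32)
importance_map = np.random.rand(32, 32)
deidentified = deidentify_image(image, importance_map, epsilon=0.5)
```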
The two privacy-preserving methods described above provide resistance against state-of-the-art privacy attacks targeting deep neural networks. Therefore, the sensitive information used in personalized deep neural network-based services can be kept secure.

Degree conferred: August 2024
Advisor: 최윤호
Introduction webpage: https://sinryang.github.io/dissertation/