Chin-Yuan Yeh

I am a fourth-year PhD student at National Taiwan University. My research focuses on deepfakes, adversarial attacks, and data mining.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github

profile photo
Publications
Disrupting Image-Translation-Based DeepFake Algorithms with Adversarial Attacks
Chin-Yuan Yeh, Hsi-Wen Chen, Shang-Lun Tsai, Sheng-De Wang
WACV Deepfakes and Presentation Attacks in Biometrics workshop, 2020
source code

This work is the first to introduce adversarial attack strategies that incapacitate Deepfake models, i.e., image-translation GANs (CycleGAN, pix2pix, and pix2pixHD). I develop two attacks: the Nullifying Attack, which minimizes the modification the Deepfake makes to its input, and the Distorting Attack, which corrupts the Deepfake's output. The paper also includes case studies on repeated inference and ensemble attacks.
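The two attack objectives can be illustrated with a toy sketch. This is not the paper's implementation: the real attacks target CycleGAN/pix2pix models on images, whereas here the "translator" `G` is a fixed near-identity linear map, so only the loss shapes and the projected-gradient loop are illustrated.

```python
import numpy as np

# Toy stand-in for an image-translation GAN: G(x) = W @ x (assumed, not the real model).
rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d)) * 0.3 + np.eye(d)  # near-identity "translator"
x = rng.standard_normal(d)                         # clean input "image"

def G(z):
    return W @ z

eps, alpha, steps = 0.1, 0.02, 200
I = np.eye(d)

# Nullifying Attack: make the Deepfake output equal its input, i.e.
# minimize ||G(x+delta) - (x+delta)||^2 subject to ||delta||_inf <= eps.
delta = np.zeros(d)
for _ in range(steps):
    z = x + delta
    grad = 2 * (W - I).T @ (G(z) - z)                 # analytic gradient of the loss
    delta = np.clip(delta - alpha * grad, -eps, eps)  # projected gradient step
null_change = np.linalg.norm(G(x + delta) - (x + delta))

# Distorting Attack: push the output far from the clean output, i.e.
# maximize ||G(x+delta) - G(x)||^2 (gradient *ascent*, random nonzero start).
delta2 = rng.uniform(-eps, eps, d)
for _ in range(steps):
    grad = 2 * W.T @ (G(x + delta2) - G(x))
    delta2 = np.clip(delta2 + alpha * grad, -eps, eps)
distortion = np.linalg.norm(G(x + delta2) - G(x))

print(null_change, np.linalg.norm(G(x) - x))  # nullifying shrinks the Deepfake's edit
print(distortion)                             # distorting moves the output away
```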

Attack As the Best Defense: Nullifying Image-to-Image Translation GANs via Limit-Aware Adversarial Attack
Chin-Yuan Yeh, Hsi-Wen Chen, Hong-Han Shuai, De-Nian Yang, Ming-Syan Chen
ICCV 2021
arXiv / video / source code

I develop the Limit-Aware Self-Guiding Gradient Sliding Attack (LaSGSA), a query-based, norm-bounded black-box adversarial attack against Img2Img GANs (which can be used as Deepfakes). LaSGSA accelerates optimization with three techniques: limit-aware RGF, which restricts query sampling to within the ε-bound; the gradient sliding mechanism, which keeps the gradient signal propagating after it is clipped by the ε-bound; and the self-guiding prior, which exploits the semantic consistency of Img2Img GANs, whose mapping has an approximately diagonal Jacobian.
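A minimal sketch of the limit-aware idea follows, assuming a toy quadratic black-box loss in place of a real GAN objective: the random-gradient-free (RGF) estimator averages finite differences along random directions, but every query point is first projected into the ε-box, so no query is wasted outside the feasible region. The gradient sliding mechanism and self-guiding prior from the paper are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
target = rng.standard_normal(d)

def f(z):
    # Black-box loss stand-in: we may only query values, never gradients.
    return np.sum((z - target) ** 2)

x = np.zeros(d)
eps, sigma, alpha, q, steps = 0.5, 1e-3, 0.05, 10, 100

def project(delta):
    # Projection onto the L_inf eps-ball around the clean input.
    return np.clip(delta, -eps, eps)

delta = np.zeros(d)
for _ in range(steps):
    g = np.zeros(d)
    base = f(x + delta)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        probe = project(delta + sigma * u)        # limit-aware: query stays in bound
        g += (f(x + probe) - base) / sigma * u    # finite-difference RGF term
    g /= q
    delta = project(delta - alpha * g)            # descent step + projection

print(f(x + delta), f(x))  # the attack reduces the black-box loss
```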

Planning Data Poisoning Attacks on Heterogeneous Recommender Systems in a Multiplayer Setting
Chin-Yuan Yeh, Hsi-Wen Chen, De-Nian Yang, Wang-Chien Lee, Philip S. Yu, Ming-Syan Chen
ICDE 2023
source code

I develop Multilevel Stackelberg Optimization over Progressive Differentiable Surrogate (MSOPDS), a data poisoning technique against heterogeneous recommender systems. It addresses the scenario of multiple attackers poisoning the same recommender system, where the first attacker aims to prevent subsequent attackers from undermining his poisoning objective. MSOPDS combines a Stackelberg game analysis of the first attacker's and the subsequent attackers' actions with projection techniques to run gradient descent over discrete recommender-system operations.
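The leader-follower structure can be sketched in miniature. This is only a hedged illustration of Stackelberg optimization, not MSOPDS itself: the real method optimizes over a differentiable surrogate of a discrete recommender, while here both players' objectives are assumed quadratics so the follower's best response is closed-form.

```python
# Toy Stackelberg game: the leader (first attacker) picks u anticipating the
# follower's (subsequent attacker's) best response v*(u).

def follower_best_response(u):
    # Follower minimizes f(u, v) = (v - u)^2, so v*(u) = u.
    return u

def leader_objective(u):
    v = follower_best_response(u)
    return (u - 1.0) ** 2 + v ** 2  # leader's loss, with the follower anticipated

# Leader runs gradient descent on F(u, v*(u)) = (u - 1)^2 + u^2.
u, lr = 0.0, 0.1
for _ in range(200):
    grad = 2 * (u - 1.0) + 2 * follower_best_response(u)
    u -= lr * grad

print(round(u, 3))  # converges to 0.5, the Stackelberg equilibrium of this toy game
```

Optimizing the leader's objective *through* the follower's best response is the essential bilevel step; MSOPDS extends this to multiple levels and to the discrete action space of recommender-system poisoning.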

Does Audio Deepfake Detection Rely on Artifacts?
Tsu-Hsien Shih, Chin-Yuan Yeh, Ming-Syan Chen
ICASSP 2024
