<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>成學</title>
    <link>https://hakk35.tistory.com/</link>
    <description></description>
    <language>ko</language>
    <pubDate>Sun, 10 May 2026 00:25:18 +0900</pubDate>
    <generator>TISTORY</generator>
    <ttl>100</ttl>
    <managingEditor>成學</managingEditor>
    <image>
      <title>成學</title>
      <url>https://tistory1.daumcdn.net/tistory/7476545/attach/b518dfa34db941b7a51cc8beba07ff54</url>
      <link>https://hakk35.tistory.com</link>
    </image>
    <item>
      <title>[Paper Review] Evidential Knowledge Distillation</title>
      <link>https://hakk35.tistory.com/67</link>
      <description>&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/ICCV2025/papers/Xiang_Evidential_Knowledge_Distillation_ICCV_2025_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Evidential&amp;nbsp;Knowledge&amp;nbsp;Distillation&lt;/a&gt;&quot;&lt;br /&gt;presented at ICCV 2025.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;br /&gt;&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Existing logit-based knowledge distillation methods treat the probability distribution as singularly deterministic, ignoring the uncertainty inherent in model predictions.&lt;/li&gt;
&lt;li&gt;Redefines the probability not as a fixed value but as a random variable governed by a second-order Dirichlet distribution, expanding the expressiveness of the transferred knowledge.&lt;/li&gt;
&lt;li&gt;Proposes a new distillation method combining macro-level transfer (aligning the expectation of the second-order distribution to optimize the relative ratio relationships among classes) and micro-level transfer (aligning the second-order distribution itself to match the numerical magnitude of the model outputs).&lt;/li&gt;
&lt;li&gt;Uses PAC-Bayesian theory to prove that EKD's optimization objective directly minimizes an upper bound on the student model's expected risk.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;515&quot; data-origin-height=&quot;801&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wgGR7/dJMcadBEkyb/Ig0KYJ3hNboE6N8PSqZol1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wgGR7/dJMcadBEkyb/Ig0KYJ3hNboE6N8PSqZol1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wgGR7/dJMcadBEkyb/Ig0KYJ3hNboE6N8PSqZol1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwgGR7%2FdJMcadBEkyb%2FIg0KYJ3hNboE6N8PSqZol1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;515&quot; height=&quot;801&quot; data-origin-width=&quot;515&quot; data-origin-height=&quot;801&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Existing KD methods assume the categorical distribution (= the prediction) is a singularly deterministic value.&lt;/li&gt;
&lt;li&gt;They assume the probability that a given sample belongs to a class is fixed and can be approximated by a DNN; in reality, finite data and limited model capacity leave an inherent uncertainty in the predictions.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;e.g., networks with different initial weights produce different predictions for the same test sample.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;To overcome this, the categorical distribution is treated as a random variable governed by a second-order distribution (a Dirichlet distribution).&lt;/li&gt;
&lt;li&gt;Proposes Evidential Knowledge Distillation, which unifies the macro and micro perspectives.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;Macro: collapses the second-order distribution into a first-order one via the expectation operator, matching the centers of the Dirichlet distributions. This optimizes the relative ratio relationships among classes (e.g., the center of mass of a confidence map).&lt;/li&gt;
&lt;li&gt;Micro: aligns the second-order distribution itself (e.g., learning not only the center but also the shape and depth of the confidence map), refining the magnitude of the model outputs and transferring fine-grained class structure.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Method&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Preliminaries&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;For classification, the categorical probability vector &lt;i&gt;&lt;b&gt;p&lt;/b&gt;&lt;/i&gt; is treated not as a fixed value but as a random variable following a Dirichlet distribution.&lt;/li&gt;
&lt;li&gt;The network's output logits &lt;i&gt;&lt;b&gt;z&lt;/b&gt;&lt;/i&gt; are transformed into a non-negative evidence vector &lt;i&gt;&lt;b&gt;e&lt;/b&gt;&lt;/i&gt; through an evidential activation function (exp).&amp;nbsp;&lt;/li&gt;
&lt;li&gt;The evidence vector &lt;i&gt;&lt;b&gt;e&lt;/b&gt;&lt;/i&gt; is combined with a prior weight \lambda to produce the parameters \alpha that define the Dirichlet distribution Dir(\alpha).&lt;/li&gt;
&lt;li&gt;The EDL cross-entropy loss is computed by the formula below.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;386&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bVd4dW/dJMcabKKwcQ/wUSkykTURtGxakOPbosOoK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bVd4dW/dJMcabKKwcQ/wUSkykTURtGxakOPbosOoK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bVd4dW/dJMcabKKwcQ/wUSkykTURtGxakOPbosOoK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbVd4dW%2FdJMcabKKwcQ%2FwUSkykTURtGxakOPbosOoK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;510&quot; height=&quot;304&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;386&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;To maximize evidence collection, this loss replaces the standard cross-entropy at every training stage of both the teacher and the student model.&lt;/li&gt;
&lt;/ul&gt;
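The pipeline above (logits to evidence to Dirichlet parameters, plus the EDL cross-entropy) can be sketched as follows. This is a minimal illustration under the review's definitions, not the authors' code; the prior weight defaulting to 1 and the numerical digamma are assumptions of the sketch.

```python
import math

def dirichlet_params(logits, prior_weight=1.0):
    # evidence e = exp(z); Dirichlet parameters alpha = e + lambda
    return [math.exp(z) + prior_weight for z in logits]

def expected_probs(alpha):
    # mean of Dir(alpha): E[p_k] = alpha_k / alpha_0
    a0 = sum(alpha)
    return [a / a0 for a in alpha]

def digamma(x, h=1e-5):
    # numerical digamma via central difference of log-gamma (sketch only)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def edl_ce_loss(alpha, label):
    # EDL cross-entropy for a one-hot label y: psi(alpha_0) - psi(alpha_y)
    a0 = sum(alpha)
    return digamma(a0) - digamma(alpha[label])
```

Note that `edl_ce_loss` shows only the standard EDL cross-entropy term; any regularizers the paper adds on top are omitted here.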
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Evidential Knowledge Distillation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Existing KD methods view the categorical probability as a singularly deterministic value and thus overlook the uncertainty of model predictions (sharing only limited granular information).&lt;/li&gt;
&lt;li&gt;A second-order Dirichlet distribution is used to capture the uncertainty of network predictions. Concretely, both the first-order distribution obtained by averaging the second-order one (macro) and the second-order distribution itself (micro) are aligned.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Macro (first-order distillation)&lt;/b&gt;&lt;b&gt;&lt;b&gt;&lt;br /&gt;&lt;/b&gt;&lt;/b&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;646&quot; data-origin-height=&quot;673&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dq5bVK/dJMcajvbxAV/Jl7ThbjR1QirAuGDrRcZH0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dq5bVK/dJMcajvbxAV/Jl7ThbjR1QirAuGDrRcZH0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dq5bVK/dJMcajvbxAV/Jl7ThbjR1QirAuGDrRcZH0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdq5bVK%2FdJMcajvbxAV%2FJl7ThbjR1QirAuGDrRcZH0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;535&quot; height=&quot;557&quot; data-origin-width=&quot;646&quot; data-origin-height=&quot;673&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Micro (second-order distillation)&lt;br /&gt;&lt;/b&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;420&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/9jdKh/dJMcaib1CpX/i6I6UcfceiPi2hIIcRjWj1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/9jdKh/dJMcaib1CpX/i6I6UcfceiPi2hIIcRjWj1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/9jdKh/dJMcaib1CpX/i6I6UcfceiPi2hIIcRjWj1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F9jdKh%2FdJMcaib1CpX%2Fi6I6UcfceiPi2hIIcRjWj1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;535&quot; height=&quot;347&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;420&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;198&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mqOib/dJMcafGl9QM/I5eM24E65o3oYAT5M0naSK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mqOib/dJMcafGl9QM/I5eM24E65o3oYAT5M0naSK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mqOib/dJMcafGl9QM/I5eM24E65o3oYAT5M0naSK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmqOib%2FdJMcafGl9QM%2FI5eM24E65o3oYAT5M0naSK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;524&quot; height=&quot;161&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;198&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&amp;nbsp;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Final loss function&lt;br /&gt;&lt;/b&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;647&quot; data-origin-height=&quot;235&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cToh3q/dJMcaaE3X6g/ERcPRnu1TPafnKJpw2X3h0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cToh3q/dJMcaaE3X6g/ERcPRnu1TPafnKJpw2X3h0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cToh3q/dJMcaaE3X6g/ERcPRnu1TPafnKJpw2X3h0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcToh3q%2FdJMcaaE3X6g%2FERcPRnu1TPafnKJpw2X3h0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;528&quot; height=&quot;192&quot; data-origin-width=&quot;647&quot; data-origin-height=&quot;235&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;/ul&gt;
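The two alignment terms can be sketched as below. This assumes a cross-entropy between the expected categorical distributions for the macro term and the closed-form KL divergence between Dirichlet distributions (teacher to student) for the micro term; the paper's exact weighting and KL direction may differ.

```python
import math

def digamma(x, h=1e-5):
    # numerical digamma via central difference of log-gamma (sketch only)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def kl_dirichlet(alpha, beta):
    # closed-form KL( Dir(alpha) || Dir(beta) )
    a0, b0 = sum(alpha), sum(beta)
    out = math.lgamma(a0) - math.lgamma(b0)
    for a, b in zip(alpha, beta):
        out += math.lgamma(b) - math.lgamma(a)
        out += (a - b) * (digamma(a) - digamma(a0))
    return out

def macro_loss(alpha_t, alpha_s):
    # first-order term: cross-entropy between the expected categorical
    # distributions of teacher and student (aligns class ratios)
    t0, s0 = sum(alpha_t), sum(alpha_s)
    return -sum((a / t0) * math.log(b / s0)
                for a, b in zip(alpha_t, alpha_s))

def micro_loss(alpha_t, alpha_s):
    # second-order term: align the Dirichlet distributions themselves
    # (also matches the magnitude of the evidence, not just its ratios)
    return kl_dirichlet(alpha_t, alpha_s)
```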
&lt;h3 style=&quot;color: #000000;&quot; data-ke-size=&quot;size23&quot;&gt;Theoretical Analysis&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;PAC-Bayesian theory is applied to justify EKD's loss function: the goal is to estimate, from the alignment on the training samples, how well the two networks align over the entire data distribution.&lt;/li&gt;
&lt;li&gt;The Dirichlet distribution that a network learns from data effectively serves as a probabilistic map (a posterior distribution) over the many potential classifiers, indicating which of them are more plausible.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;375&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dZPMTo/dJMcacJD7Ac/JpEezDkQ3nOqX8WyDIKdn1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dZPMTo/dJMcacJD7Ac/JpEezDkQ3nOqX8WyDIKdn1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dZPMTo/dJMcacJD7Ac/JpEezDkQ3nOqX8WyDIKdn1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdZPMTo%2FdJMcacJD7Ac%2FJpEezDkQ3nOqX8WyDIKdn1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;524&quot; height=&quot;303&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;375&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;Let the expected risk be the student's true risk over the full data distribution &lt;i&gt;D&lt;/i&gt;, and the empirical risk be the risk measured on the available training set. Then the student's expected risk is bounded above by the sum of the empirical risk, the similarity between the second-order distributions (a KL divergence), and a constant (C) depending on the number of data samples.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;518&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bCbGei/dJMcaipxBBo/cGWPdMJ6rBYqz7ooR0Z09K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bCbGei/dJMcaipxBBo/cGWPdMJ6rBYqz7ooR0Z09K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bCbGei/dJMcaipxBBo/cGWPdMJ6rBYqz7ooR0Z09K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbCbGei%2FdJMcaipxBBo%2FcGWPdMJ6rBYqz7ooR0Z09K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;517&quot; height=&quot;416&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;518&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;192&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ckAgm7/dJMcab40tgj/FfLULDcJiH6zDMGjx0fzxK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ckAgm7/dJMcab40tgj/FfLULDcJiH6zDMGjx0fzxK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ckAgm7/dJMcab40tgj/FfLULDcJiH6zDMGjx0fzxK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FckAgm7%2FdJMcab40tgj%2FFfLULDcJiH6zDMGjx0fzxK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;497&quot; height=&quot;147&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;192&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;As in conventional KD, merely reducing the empirical risk does not guarantee a reduction of the expected risk in practice, and is an incomplete way to transfer the teacher's generalization ability.&lt;/li&gt;
&lt;li&gt;EKD takes the mathematical upper bound derived above as its direct optimization objective: by simultaneously aligning the class ratios (first-order) and the distribution shape (second-order), it minimizes the student's real-world risk.&lt;/li&gt;
&lt;/ul&gt;
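For reference, a bound of this kind has the standard McAllester-style PAC-Bayesian form; the sketch below uses generic notation (student posterior Q_s, teacher Dirichlet Q_t as the prior, m training samples, confidence delta) and its constants may differ from the paper's exact statement:

```latex
R_{\mathcal{D}}(Q_s) \;\le\; \widehat{R}_{S}(Q_s)
  + \sqrt{\frac{\mathrm{KL}\!\left(Q_s \,\Vert\, Q_t\right)
                + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

Minimizing the empirical risk term together with the KL term (the micro alignment) is what tightens the right-hand side, which is the intuition behind EKD optimizing the bound directly.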
&lt;h3 style=&quot;color: #000000;&quot; data-ke-size=&quot;size23&quot;&gt;Toy Case&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;639&quot; data-origin-height=&quot;863&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bqsfj6/dJMcadhsg6W/H4p3aPHavKCGs6OIP8fzT1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bqsfj6/dJMcadhsg6W/H4p3aPHavKCGs6OIP8fzT1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bqsfj6/dJMcadhsg6W/H4p3aPHavKCGs6OIP8fzT1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbqsfj6%2FdJMcadhsg6W%2FH4p3aPHavKCGs6OIP8fzT1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;524&quot; height=&quot;708&quot; data-origin-width=&quot;639&quot; data-origin-height=&quot;863&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1021&quot; data-origin-height=&quot;610&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wwiGx/dJMcah5fdPl/EXrWpKx48hxAnu3LRsB4n0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wwiGx/dJMcah5fdPl/EXrWpKx48hxAnu3LRsB4n0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wwiGx/dJMcah5fdPl/EXrWpKx48hxAnu3LRsB4n0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwwiGx%2FdJMcah5fdPl%2FEXrWpKx48hxAnu3LRsB4n0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1021&quot; height=&quot;610&quot; data-origin-width=&quot;1021&quot; data-origin-height=&quot;610&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;604&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bFHCYv/dJMcacQnIt4/Lex5MYgHyasEKuLluR3u9K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bFHCYv/dJMcacQnIt4/Lex5MYgHyasEKuLluR3u9K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bFHCYv/dJMcacQnIt4/Lex5MYgHyasEKuLluR3u9K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbFHCYv%2FdJMcacQnIt4%2FLex5MYgHyasEKuLluR3u9K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1024&quot; height=&quot;604&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;604&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;519&quot; data-origin-height=&quot;350&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qJI0k/dJMcaadXagh/elScDDmVKTlbXxXaSvs9ok/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qJI0k/dJMcaadXagh/elScDDmVKTlbXxXaSvs9ok/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qJI0k/dJMcaadXagh/elScDDmVKTlbXxXaSvs9ok/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FqJI0k%2FdJMcaadXagh%2FelScDDmVKTlbXxXaSvs9ok%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;519&quot; height=&quot;350&quot; data-origin-width=&quot;519&quot; data-origin-height=&quot;350&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;This paper uses an evidential second-order distribution to capture predictive uncertainty and provide a more comprehensive knowledge representation.&lt;/li&gt;
&lt;li&gt;The EKD method is designed to transfer knowledge at both the macro and micro levels.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;At the macro level, aligning the expectation (global characteristics) of the second-order distribution improves the optimization of inter-class ratios in the student's outputs.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;At the micro level, aligning the second-order distribution itself matches the numerical magnitude of the student's outputs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;PAC-Bayesian theory proves that EKD directly optimizes an upper bound on the student model's expected risk.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Knowledge Distillation</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/67</guid>
      <comments>https://hakk35.tistory.com/67#entry67comment</comments>
      <pubDate>Wed, 22 Apr 2026 21:41:21 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] Knowledge Distillation with Refined Logits</title>
      <link>https://hakk35.tistory.com/66</link>
      <description>&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/ICCV2025/papers/Sun_Knowledge_Distillation_with_Refined_Logits_ICCV_2025_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Knowledge&amp;nbsp;Distillation&amp;nbsp;with&amp;nbsp;Refined&amp;nbsp;Logits&lt;/a&gt;&quot;&lt;br /&gt;presented at ICCV 2025.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;br /&gt;&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Even a high-performing teacher can make wrong predictions, and forcing the student to imitate them puts the distillation objective in conflict with the ground-truth labels. Unlike prior correction-based methods, this work refines the wrong predictions while preserving class correlations.&lt;/li&gt;
&lt;li&gt;Sample confidence teaches the student the appropriate confidence level for the true class, while masked correlation masks the wrong classes the teacher rated above the true class and lets the student learn the class correlations among the remaining classes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;712&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/yhzgq/dJMcafzykDe/7kKZ2zQ4VzPzjgb7g6kSVK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/yhzgq/dJMcafzykDe/7kKZ2zQ4VzPzjgb7g6kSVK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/yhzgq/dJMcafzykDe/7kKZ2zQ4VzPzjgb7g6kSVK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fyhzgq%2FdJMcafzykDe%2F7kKZ2zQ4VzPzjgb7g6kSVK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;468&quot; height=&quot;506&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;712&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Most previous KD methods assume the teacher's predictions are correct; in practice the teacher can be wrong, which causes an exacerbated divergence between the standard distillation loss and the cross-entropy loss.&lt;/li&gt;
&lt;li&gt;Existing correction-based distillation either swaps the teacher's predicted maximum class with the true class (Swap) or amplifies the probability of the true class (Augment), but both damage the class correlations (= high-level semantic relationships).&lt;/li&gt;
&lt;li&gt;Refined Logit Distillation is proposed to remove the teacher's mistakes while preserving the essential class correlations.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;Sample confidence (SC): aligns the student's true-class probability with the teacher's predicted-class probability, mitigating the teacher's mistakes. This guides the student toward an appropriate confidence level and prevents overfitting.&lt;/li&gt;
&lt;li&gt;Masked correlation (MC): dynamically masks the wrong classes that the teacher rated above the true class, removing the misinformation, and transfers the meaningful class correlations among the remaining classes. As a result, fewer classes are used for distillation when the teacher makes more mistakes, and more classes when it makes fewer.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Methodology&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1343&quot; data-origin-height=&quot;767&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/v97DX/dJMcaa54MxW/VQrXl4d1btqD1U9TZpdzc1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/v97DX/dJMcaa54MxW/VQrXl4d1btqD1U9TZpdzc1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/v97DX/dJMcaa54MxW/VQrXl4d1btqD1U9TZpdzc1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fv97DX%2FdJMcaa54MxW%2FVQrXl4d1btqD1U9TZpdzc1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1343&quot; height=&quot;767&quot; data-origin-width=&quot;1343&quot; data-origin-height=&quot;767&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;698&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/becrdz/dJMcaakGSNf/KbKtUx8xM9lnbfJRDUVyKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/becrdz/dJMcaakGSNf/KbKtUx8xM9lnbfJRDUVyKk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/becrdz/dJMcaakGSNf/KbKtUx8xM9lnbfJRDUVyKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbecrdz%2FdJMcaakGSNf%2FKbKtUx8xM9lnbfJRDUVyKk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;410&quot; height=&quot;435&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;698&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Sample Confidence Distillation&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;662&quot; data-origin-height=&quot;313&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bdocJH/dJMcacv4hKA/0oM9iQfrdNwVQKDyTjIj2k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bdocJH/dJMcacv4hKA/0oM9iQfrdNwVQKDyTjIj2k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bdocJH/dJMcacv4hKA/0oM9iQfrdNwVQKDyTjIj2k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbdocJH%2FdJMcacv4hKA%2F0oM9iQfrdNwVQKDyTjIj2k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;461&quot; height=&quot;218&quot; data-origin-width=&quot;662&quot; data-origin-height=&quot;313&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;SCD defines the degree of confidence a model has in a given sample as a binary probability distribution and transfers it.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;For the teacher, it consists of the teacher's highest predicted probability and the summed probability of all remaining classes.&lt;/li&gt;
&lt;li&gt;For the student, it consists of the student's predicted probability for the true class and the summed probability of all remaining classes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
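The two binary confidence distributions above can be compared, for instance, with a KL divergence; the KL direction (teacher to student) and the absence of a temperature are assumptions of this sketch, not necessarily the paper's exact formulation.

```python
import math

def scd_loss(p_teacher, p_student, true_class):
    # teacher side: its top predicted probability vs. everything else
    t_conf = max(p_teacher)
    # student side: its probability on the ground-truth class vs. everything else
    s_conf = p_student[true_class]
    # KL divergence between the two binary (conf, 1-conf) distributions
    return (t_conf * math.log(t_conf / s_conf)
            + (1 - t_conf) * math.log((1 - t_conf) / (1 - s_conf)))
```

Note that the target is the teacher's confidence in its own top prediction, so even on samples the teacher gets wrong, the student is taught how confident to be about the true class rather than which class to rank highest.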
&lt;h3 style=&quot;color: #000000;&quot; data-ke-size=&quot;size23&quot;&gt;Masked Correlation Distillation&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;594&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Itbag/dJMb9969VJc/aksGScAcF1P0Oy2VgowPF1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Itbag/dJMb9969VJc/aksGScAcF1P0Oy2VgowPF1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Itbag/dJMb9969VJc/aksGScAcF1P0Oy2VgowPF1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FItbag%2FdJMb9969VJc%2FaksGScAcF1P0Oy2VgowPF1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;492&quot; height=&quot;450&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;594&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;MCD dynamically masks the classes the teacher confuses, transferring only the meaningful class correlations.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;When the teacher is accurate, few classes are masked and most class correlations reach the student; when the teacher is inaccurate, many classes are masked so the student is not misled by the teacher's wrong information.&lt;/li&gt;
&lt;/ul&gt;
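A sketch of the dynamic masking, assuming the mask drops the true class plus every wrong class the teacher rated at or above it, followed by a KL divergence over the renormalized remainder; the exact tie handling and normalization are assumptions here.

```python
import math

def mcd_loss(p_teacher, p_student, true_class):
    # dynamic mask: drop the true class plus every wrong class that the
    # teacher ranked at or above the true class (its misleading evidence)
    t_true = p_teacher[true_class]
    keep = [k for k in range(len(p_teacher))
            if k != true_class and t_true > p_teacher[k]]
    if not keep:
        # teacher confused on this sample: no correlations to transfer
        return 0.0
    # renormalize both distributions over the surviving classes
    zt = sum(p_teacher[k] for k in keep)
    zs = sum(p_student[k] for k in keep)
    # KL divergence from the masked teacher to the masked student
    return sum((p_teacher[k] / zt)
               * math.log((p_teacher[k] / zt) / (p_student[k] / zs))
               for k in keep)
```

The mask size adapts per sample: a confident, correct teacher keeps most classes, while a wrong teacher keeps few (or none), which is exactly the behavior described above.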
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Refined Logit Distillation&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;74&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/7LQbW/dJMcacphzYV/Ky3UkkdkViW5atsGlQQRTK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/7LQbW/dJMcacphzYV/Ky3UkkdkViW5atsGlQQRTK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/7LQbW/dJMcacphzYV/Ky3UkkdkViW5atsGlQQRTK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F7LQbW%2FdJMcacphzYV%2FKy3UkkdkViW5atsGlQQRTK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;494&quot; height=&quot;56&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;74&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1344&quot; data-origin-height=&quot;786&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cp2Aay/dJMcadhpLQW/6e6i11AcpHZSV1jKGgcJxk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cp2Aay/dJMcadhpLQW/6e6i11AcpHZSV1jKGgcJxk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cp2Aay/dJMcadhpLQW/6e6i11AcpHZSV1jKGgcJxk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcp2Aay%2FdJMcadhpLQW%2F6e6i11AcpHZSV1jKGgcJxk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1344&quot; height=&quot;786&quot; data-origin-width=&quot;1344&quot; data-origin-height=&quot;786&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1345&quot; data-origin-height=&quot;760&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bbbu4q/dJMcafsK2IC/QKV3Aeg1qufbdbAwflbdZ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bbbu4q/dJMcafsK2IC/QKV3Aeg1qufbdbAwflbdZ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bbbu4q/dJMcafsK2IC/QKV3Aeg1qufbdbAwflbdZ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbbbu4q%2FdJMcafsK2IC%2FQKV3Aeg1qufbdbAwflbdZ0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1345&quot; height=&quot;760&quot; data-origin-width=&quot;1345&quot; data-origin-height=&quot;760&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Reversed Knowledge Distillation&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;656&quot; data-origin-height=&quot;449&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DOGbm/dJMcaaSybck/fVNbcOItKdWkfzEeD6DcG0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DOGbm/dJMcaaSybck/fVNbcOItKdWkfzEeD6DcG0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DOGbm/dJMcaaSybck/fVNbcOItKdWkfzEeD6DcG0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDOGbm%2FdJMcaaSybck%2FfVNbcOItKdWkfzEeD6DcG0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;488&quot; height=&quot;334&quot; data-origin-width=&quot;656&quot; data-origin-height=&quot;449&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Analyzes performance gains when the teacher performs worse than the student (i.e., whether a weaker teacher can still improve a stronger student's performance).&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;Logit Discrepancy Visualization&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;660&quot; data-origin-height=&quot;458&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b3aa9J/dJMcabw7nqn/fFAk6JdMKbrIeB82YQZqcK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b3aa9J/dJMcabw7nqn/fFAk6JdMKbrIeB82YQZqcK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b3aa9J/dJMcabw7nqn/fFAk6JdMKbrIeB82YQZqcK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb3aa9J%2FdJMcabw7nqn%2FfFAk6JdMKbrIeB82YQZqcK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;514&quot; height=&quot;357&quot; data-origin-width=&quot;660&quot; data-origin-height=&quot;458&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Although RLD outperforms DKD, the actual logit discrepancy is larger for RLD than for DKD. This is because RLD corrects the teacher's faulty knowledge and grants the student the autonomy to form its own predictions.&lt;/li&gt;
&lt;li&gt;Unconditionally following the teacher's knowledge is not the optimal strategy; a mechanism for correcting its wrong answers is essential.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Ablation Study&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;675&quot; data-origin-height=&quot;499&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Hc6jw/dJMcahqBiZI/yPCFeYKE6vCNtPziLxfW00/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Hc6jw/dJMcahqBiZI/yPCFeYKE6vCNtPziLxfW00/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Hc6jw/dJMcahqBiZI/yPCFeYKE6vCNtPziLxfW00/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHc6jw%2FdJMcahqBiZI%2FyPCFeYKE6vCNtPziLxfW00%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;523&quot; height=&quot;387&quot; data-origin-width=&quot;675&quot; data-origin-height=&quot;499&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Masking every class whose prediction is greater than or equal to the ground-truth class outperforms masking only classes strictly above it. With the M_g variant, knowledge about the ground-truth class appears redundantly in both SCD and MCD, creating a conflict between the learning objectives.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Existing knowledge distillation methods failed to adequately account for the negative impact of the teacher's wrong predictions on the student. Arbitrarily editing the teacher's outputs can serve as a workaround, but it corrupts the class correlations.&lt;/li&gt;
&lt;li&gt;To resolve this, the paper proposes sample confidence and masked correlation.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Knowledge Distillation</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/66</guid>
      <comments>https://hakk35.tistory.com/66#entry66comment</comments>
      <pubDate>Sun, 19 Apr 2026 21:50:29 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] What to Distill? Fast Knowledge Distillation with Adaptive Sampling</title>
      <link>https://hakk35.tistory.com/65</link>
      <description>&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/ICCV2025/papers/Chae_What_to_Distill_Fast_Knowledge_Distillation_with_Adaptive_Sampling_ICCV_2025_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;What&amp;nbsp;to&amp;nbsp;Distill?&amp;nbsp;Fast&amp;nbsp;Knowledge&amp;nbsp;Distillation&amp;nbsp;with&amp;nbsp;Adaptive&amp;nbsp;Sampling&lt;/a&gt;&quot;&lt;br /&gt;presented at ICCV 2025.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;br /&gt;&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;To identify which data contributes to distillation, defines two analysis metrics: &lt;b&gt;quantity of knowledge&lt;/b&gt;, the teacher-student gap, and &lt;b&gt;quality of knowledge&lt;/b&gt;, the teacher-ground-truth gap.&lt;/li&gt;
&lt;li&gt;Proposes &lt;b&gt;quantity-based subsampling&lt;/b&gt;, which preferentially selects samples rich in knowledge, and &lt;b&gt;quality-calibrated loss weighting&lt;/b&gt;, which reduces the influence of low-quality knowledge.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1346&quot; data-origin-height=&quot;516&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cTW02l/dJMcaf0zUyR/DsVAMZLI3RQVceSqbKAgd0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cTW02l/dJMcaf0zUyR/DsVAMZLI3RQVceSqbKAgd0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cTW02l/dJMcaf0zUyR/DsVAMZLI3RQVceSqbKAgd0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcTW02l%2FdJMcaf0zUyR%2FDsVAMZLI3RQVceSqbKAgd0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1346&quot; height=&quot;516&quot; data-origin-width=&quot;1346&quot; data-origin-height=&quot;516&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Data selection strongly affects both the effectiveness and efficiency of KD, yet prior KD methods have paid little attention to the influence of the data itself.&lt;/li&gt;
&lt;li&gt;Not all data contributes equally to distillation; certain samples carry rich information that reinforces the student's learning more effectively.&lt;/li&gt;
&lt;li&gt;To assess the influence of data, defines the two perspectives of &lt;i&gt;&lt;b&gt;quantity of knowledge&lt;/b&gt;&lt;/i&gt; and &lt;i&gt;&lt;b&gt;quality of knowledge&lt;/b&gt;&lt;/i&gt;, and builds on them to propose KDAS, which dynamically excludes samples that are harmful or uninformative and selects only good ones.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;quantity-based subsampling&lt;/li&gt;
&lt;li&gt;quality-calibrated loss weighting&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Distillation Efficiency Analysis&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Measures the quantity and quality of each sample via &lt;b&gt;KL divergence&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;Quantity of knowledge: measured as the prediction gap between teacher and student; the larger the value, the denser the information the student can learn from the teacher.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;293&quot; data-origin-height=&quot;56&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cegEnP/dJMcacbJWSx/XpkF8c2rVaRpkPWQp0RkQK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cegEnP/dJMcacbJWSx/XpkF8c2rVaRpkPWQp0RkQK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cegEnP/dJMcacbJWSx/XpkF8c2rVaRpkPWQp0RkQK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcegEnP%2FdJMcacbJWSx%2FXpkF8c2rVaRpkPWQp0RkQK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;230&quot; height=&quot;44&quot; data-origin-width=&quot;293&quot; data-origin-height=&quot;56&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;Quality of knowledge: measured as the gap between the teacher's prediction and the ground truth; knowledge quality is low when this gap is either too large or too small.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;290&quot; data-origin-height=&quot;51&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/3AArj/dJMcagSKN59/hD88EBf9EHcVFpU3dUrfvk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/3AArj/dJMcagSKN59/hD88EBf9EHcVFpU3dUrfvk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/3AArj/dJMcagSKN59/hD88EBf9EHcVFpU3dUrfvk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F3AArj%2FdJMcagSKN59%2FhD88EBf9EHcVFpU3dUrfvk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;245&quot; height=&quot;43&quot; data-origin-width=&quot;290&quot; data-origin-height=&quot;51&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
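Both metrics are plain KL divergences between distributions the training loop already has. A minimal sketch follows; the direction of the KL for the quality term (label-to-teacher) is an assumption, as is the epsilon smoothing.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) with epsilon smoothing to avoid log(0)."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([0.6, 0.3, 0.1])   # teacher softmax output
student = np.array([0.2, 0.5, 0.3])   # student softmax output
onehot  = np.array([1.0, 0.0, 0.0])   # ground-truth label

# Quantity of knowledge: teacher-student gap; larger means more to learn.
quantity = kl(teacher, student)

# Quality of knowledge: teacher-label gap; mid-range values are most useful,
# since near-zero gaps carry no dark knowledge and huge gaps are misleading.
quality = kl(onehot, teacher)
```

With a one-hot label, the quality term reduces to the teacher's negative log-probability on the true class.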
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Finding 1: Quantity of Knowledge&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;663&quot; data-origin-height=&quot;660&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cvE6m0/dJMcajaPkPw/66Qu6zKOAnXdzwoNz5s3e1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cvE6m0/dJMcajaPkPw/66Qu6zKOAnXdzwoNz5s3e1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cvE6m0/dJMcajaPkPw/66Qu6zKOAnXdzwoNz5s3e1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcvE6m0%2FdJMcajaPkPw%2F66Qu6zKOAnXdzwoNz5s3e1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;497&quot; height=&quot;495&quot; data-origin-width=&quot;663&quot; data-origin-height=&quot;660&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Samples on which teacher and student predictions differ greatly are the most effective for distillation; samples with small gaps barely affect the distillation outcome.&lt;/li&gt;
&lt;li&gt;Hard example mining selects difficult samples by the overall training loss, whereas this paper uses the soft-target loss. Experiments show that selecting samples with a high teacher-student KL value is more effective than selecting those with a high overall training loss.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Finding 2: Curriculum Sampling&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;662&quot; data-origin-height=&quot;409&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Ctawn/dJMcad2JCsS/QaYc6w65IA1gaDGk1m3cwk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Ctawn/dJMcad2JCsS/QaYc6w65IA1gaDGk1m3cwk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Ctawn/dJMcad2JCsS/QaYc6w65IA1gaDGk1m3cwk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCtawn%2FdJMcad2JCsS%2FQaYc6w65IA1gaDGk1m3cwk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;482&quot; height=&quot;298&quot; data-origin-width=&quot;662&quot; data-origin-height=&quot;409&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Dynamically decreasing the sampling ratio during training yields higher performance than a fixed ratio (with the same total number of samples).&lt;/li&gt;
&lt;li&gt;Early in training there are many samples with a large teacher-student knowledge gap, so selecting more samples early on is advantageous.&lt;/li&gt;
&lt;li&gt;Unlike conventional curriculum sampling, which orders samples from easy to hard, KDAS uses the quantity of knowledge to adjust the number of samples itself.&lt;/li&gt;
&lt;/ul&gt;
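The decaying-ratio schedule can be sketched as a simple interpolation from a high initial ratio to a low final one; the linear shape and the endpoint values are illustrative assumptions, not the paper's exact schedule.

```python
def sampling_ratio(epoch: int, total_epochs: int,
                   r_start: float = 0.9, r_end: float = 0.3) -> float:
    """Hypothetical curriculum: sample many examples early (large
    teacher-student gaps are common), fewer later, at a fixed total budget."""
    t = epoch / max(total_epochs - 1, 1)       # progress in [0, 1]
    return r_start + (r_end - r_start) * t     # linear decay

ratios = [sampling_ratio(e, 10) for e in range(10)]
```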
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Finding 3: Quality of Knowledge&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;659&quot; data-origin-height=&quot;389&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ZAav0/dJMcadn9u8C/Cc0SIXbeGWxnNgHu2OVKEK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ZAav0/dJMcadn9u8C/Cc0SIXbeGWxnNgHu2OVKEK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ZAav0/dJMcadn9u8C/Cc0SIXbeGWxnNgHu2OVKEK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZAav0%2FdJMcadn9u8C%2FCc0SIXbeGWxnNgHu2OVKEK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;501&quot; height=&quot;296&quot; data-origin-width=&quot;659&quot; data-origin-height=&quot;389&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Using samples whose teacher-to-ground-truth KL value is in the mid range yields the best performance.&lt;/li&gt;
&lt;li&gt;When the teacher's prediction almost matches the label, it provides little dark knowledge; when it deviates too far from the label, the risk of teaching the student wrong knowledge grows.&lt;/li&gt;
&lt;li&gt;The most effective knowledge transfer therefore requires a teacher distribution that differs moderately from the label while remaining sufficiently aligned with it.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Finding 4: Penalization&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;657&quot; data-origin-height=&quot;498&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/xQSXx/dJMcafM6qWs/W0cffbrTrQY2azNvPEW091/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/xQSXx/dJMcafM6qWs/W0cffbrTrQY2azNvPEW091/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/xQSXx/dJMcafM6qWs/W0cffbrTrQY2azNvPEW091/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FxQSXx%2FdJMcafM6qWs%2FW0cffbrTrQY2azNvPEW091%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;493&quot; height=&quot;374&quot; data-origin-width=&quot;657&quot; data-origin-height=&quot;498&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Assigning larger weights to high-quality samples enables more efficient learning.&lt;/li&gt;
&lt;li&gt;When the teacher-to-label KL value falls outside given thresholds (a lower and an upper bound), the sample's influence is reduced.&lt;/li&gt;
&lt;/ul&gt;
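The thresholded down-weighting can be sketched as a per-sample loss weight; the threshold values and the single penalty factor are illustrative assumptions (the paper's calibration may be smoother).

```python
def quality_weight(kl_teacher_gt: float,
                   lower: float = 0.1, upper: float = 2.0,
                   penalty: float = 0.5) -> float:
    """Hypothetical sketch: keep full weight for mid-range teacher-label KL
    (useful dark knowledge), and shrink the influence of samples whose KL
    is too small (nothing to teach) or too large (misleading teacher)."""
    return 1.0 if lower <= kl_teacher_gt <= upper else penalty

# Per-sample distillation loss would then be scaled as:
#   loss_i = quality_weight(kl_i) * kd_loss_i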
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Faster Knowledge Distillation&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Quantity-based subsampling&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;640&quot; data-origin-height=&quot;349&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/1BSbS/dJMcagrFrxy/wi581C0dRKfSlrBf7EiD70/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/1BSbS/dJMcagrFrxy/wi581C0dRKfSlrBf7EiD70/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/1BSbS/dJMcagrFrxy/wi581C0dRKfSlrBf7EiD70/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F1BSbS%2FdJMcagrFrxy%2Fwi581C0dRKfSlrBf7EiD70%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;543&quot; height=&quot;296&quot; data-origin-width=&quot;640&quot; data-origin-height=&quot;349&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Computing the quantity-of-knowledge value for every sample at every epoch can be burdensome due to the teacher's computational cost. Instead, sampling is performed sparsely at fixed intervals, effectively shortening the total distillation time.&lt;/li&gt;
&lt;li&gt;Experiments show that distillation performance is sufficiently preserved even with a sparse sampling period rather than sampling every epoch.&lt;/li&gt;
&lt;/ul&gt;
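The subsampling loop reduces to two pieces: a top-k selection by quantity-of-knowledge score, and a sparse rescoring period that amortizes the teacher's forward passes. Function names and the period value are illustrative assumptions.

```python
import numpy as np

def subsample_topk(scores: np.ndarray, ratio: float) -> np.ndarray:
    """Keep the indices of the samples with the highest
    quantity-of-knowledge (teacher-student KL) scores."""
    k = max(1, int(len(scores) * ratio))
    return np.argsort(scores)[::-1][:k]   # descending order, top-k

def should_rescore(epoch: int, period: int = 5) -> bool:
    # Rescoring every epoch would add a full teacher pass over the data;
    # a sparse period keeps most of the benefit at a fraction of the cost.
    return epoch % period == 0

idx = subsample_topk(np.array([0.1, 0.9, 0.5, 0.2]), ratio=0.5)
```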
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Quality-calibrated loss weighting&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;524&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/daIjnU/dJMcabKHTtg/N4KZWfBRJ6mraKjnYLL4i0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/daIjnU/dJMcabKHTtg/N4KZWfBRJ6mraKjnYLL4i0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/daIjnU/dJMcabKHTtg/N4KZWfBRJ6mraKjnYLL4i0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdaIjnU%2FdJMcabKHTtg%2FN4KZWfBRJ6mraKjnYLL4i0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;562&quot; height=&quot;454&quot; data-origin-width=&quot;649&quot; data-origin-height=&quot;524&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;175&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/23wPZ/dJMcacJBvMX/KeaTyUsR3MIRqXngEhtDkK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/23wPZ/dJMcacJBvMX/KeaTyUsR3MIRqXngEhtDkK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/23wPZ/dJMcacJBvMX/KeaTyUsR3MIRqXngEhtDkK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F23wPZ%2FdJMcacJBvMX%2FKeaTyUsR3MIRqXngEhtDkK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;522&quot; height=&quot;153&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;175&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A warmup phase at the start of training lets the student trust the teacher, after which the penalty strength is gradually increased.&lt;/li&gt;
&lt;/ul&gt;
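The warmup-then-ramp behavior can be sketched as a piecewise schedule; the linear ramp shape and the epoch counts are assumptions for illustration only.

```python
def penalty_strength(epoch: int, warmup: int = 10,
                     ramp: int = 20, max_penalty: float = 1.0) -> float:
    """Hypothetical schedule: no penalty during warmup (trust the teacher),
    then a linear ramp up to the full penalty strength."""
    if epoch < warmup:
        return 0.0
    return min(max_penalty, (epoch - warmup) / ramp * max_penalty)
```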
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Evaluation&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1349&quot; data-origin-height=&quot;642&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mu88L/dJMcaiiIpBb/GQwOOHS0jK8c3DhyZgZKK0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mu88L/dJMcaiiIpBb/GQwOOHS0jK8c3DhyZgZKK0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mu88L/dJMcaiiIpBb/GQwOOHS0jK8c3DhyZgZKK0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fmu88L%2FdJMcaiiIpBb%2FGQwOOHS0jK8c3DhyZgZKK0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1349&quot; height=&quot;642&quot; data-origin-width=&quot;1349&quot; data-origin-height=&quot;642&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;660&quot; data-origin-height=&quot;300&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cEzfy3/dJMcaiJKFii/U99Gn26cjo03hVoyrxj0i1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cEzfy3/dJMcaiJKFii/U99Gn26cjo03hVoyrxj0i1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cEzfy3/dJMcaiJKFii/U99Gn26cjo03hVoyrxj0i1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcEzfy3%2FdJMcaiJKFii%2FU99Gn26cjo03hVoyrxj0i1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;508&quot; height=&quot;231&quot; data-origin-width=&quot;660&quot; data-origin-height=&quot;300&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;726&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bcgmm1/dJMcahc2PoF/NahkBhGqTgkKLFuKUFxrj0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bcgmm1/dJMcahc2PoF/NahkBhGqTgkKLFuKUFxrj0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bcgmm1/dJMcahc2PoF/NahkBhGqTgkKLFuKUFxrj0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbcgmm1%2FdJMcahc2PoF%2FNahkBhGqTgkKLFuKUFxrj0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;531&quot; height=&quot;591&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;726&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;This work analyzes the influence of data in knowledge distillation from the perspectives of quantity and quality.&lt;/li&gt;
&lt;li&gt;For efficient knowledge distillation, it proposes an adaptive sampling method that accelerates the distillation process by selecting and exploiting samples well suited for distillation.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Knowledge Distillation</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/65</guid>
      <comments>https://hakk35.tistory.com/65#entry65comment</comments>
      <pubDate>Sun, 19 Apr 2026 16:16:32 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] VRM: Knowledge Distillation via Virtual Relation Matching</title>
      <link>https://hakk35.tistory.com/64</link>
      <description>&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/ICCV2025/papers/Zhang_VRM_Knowledge_Distillation_via_Virtual_Relation_Matching_ICCV_2025_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;VRM: Knowledge Distillation via Virtual Relation Matching&lt;/a&gt;&quot;&lt;br /&gt;presented at ICCV 2025.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;br /&gt;&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;672&quot; data-origin-height=&quot;581&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bmFxCl/dJMcaf7lGiX/NQPscGdvXeP21mCSkxbesK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bmFxCl/dJMcaf7lGiX/NQPscGdvXeP21mCSkxbesK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bmFxCl/dJMcaf7lGiX/NQPscGdvXeP21mCSkxbesK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbmFxCl%2FdJMcaf7lGiX%2FNQPscGdvXeP21mCSkxbesK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;508&quot; height=&quot;439&quot; data-origin-width=&quot;672&quot; data-origin-height=&quot;581&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Identifies that relation matching (RM) is more prone to overfitting than instance matching (IM), and that the harmful gradients of noisy samples spread across the entire batch.&lt;/li&gt;
&lt;li&gt;Generates virtual views and uses real-virtual sample correlations as training signals, strengthening regularization and improving performance.&lt;/li&gt;
&lt;li&gt;Applies a pruning strategy that removes redundant computation (redundant edges) and blocks unreliable relations (unreliable edges).&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Pilot Studies&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1347&quot; data-origin-height=&quot;295&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/x1Z0s/dJMcagkS5no/ytaQP12bjddkMWOOnZkXSK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/x1Z0s/dJMcagkS5no/ytaQP12bjddkMWOOnZkXSK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/x1Z0s/dJMcagkS5no/ytaQP12bjddkMWOOnZkXSK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fx1Z0s%2FdJMcagkS5no%2FytaQP12bjddkMWOOnZkXSK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1347&quot; height=&quot;295&quot; data-origin-width=&quot;1347&quot; data-origin-height=&quot;295&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1343&quot; data-origin-height=&quot;271&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/baeohZ/dJMcaibYBWx/zmWPJFiIwfHsXqqPJAhIyK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/baeohZ/dJMcaibYBWx/zmWPJFiIwfHsXqqPJAhIyK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/baeohZ/dJMcaibYBWx/zmWPJFiIwfHsXqqPJAhIyK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbaeohZ%2FdJMcaibYBWx%2FzmWPJFiIwfHsXqqPJAhIyK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1343&quot; height=&quot;271&quot; data-origin-width=&quot;1343&quot; data-origin-height=&quot;271&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Because RM is a weaker objective than IM, with looser constraints, the student tends to over-adapt to the training data and generalize poorly.&lt;/li&gt;
&lt;li&gt;A single spurious prediction in a batch propagates through the relation graph and corrupts the gradients of every sample in the batch.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Method&lt;/b&gt;&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Inter-Sample Relations&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;581&quot; data-origin-height=&quot;89&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/0v8Af/dJMcaadVXot/ULRcpnv57yjHJggrIovoVK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/0v8Af/dJMcaadVXot/ULRcpnv57yjHJggrIovoVK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/0v8Af/dJMcaadVXot/ULRcpnv57yjHJggrIovoVK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F0v8Af%2FdJMcaadVXot%2FULRcpnv57yjHJggrIovoVK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;470&quot; height=&quot;72&quot; data-origin-width=&quot;581&quot; data-origin-height=&quot;89&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Relations are built from the predicted logits, which carry fine-grained categorical knowledge.&lt;/li&gt;
&lt;li&gt;The conventional Gram matrix merges class-wise detail into a single value through the inner product; VRM instead uses pairwise distances that preserve information along the class dimension, transferring per-class detail as auxiliary knowledge alongside the inter-sample relations.&lt;/li&gt;
&lt;/ul&gt;
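The Gram-versus-pairwise-distance point can be sketched numerically. A minimal NumPy illustration (function names and the absolute-difference relation are my own; the paper's exact relation definition may differ):

```python
import numpy as np

def gram_relations(logits):
    # Gram matrix: the inner product collapses the class dimension,
    # so per-class detail is merged into one scalar per sample pair.
    return logits @ logits.T                        # (B, B)

def pairwise_class_distances(logits):
    # Element-wise pairwise differences keep the class axis, so the
    # relation tensor still carries class-wise detail for every pair.
    diff = logits[:, None, :] - logits[None, :, :]  # (B, B, C)
    return np.abs(diff)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 10))   # batch of 4 samples, 10 classes
G = gram_relations(z)
D = pairwise_class_distances(z)
assert G.shape == (4, 4) and D.shape == (4, 4, 10)
```

Note how the Gram entry for a pair is a single number, while the distance tensor keeps one value per class for the same pair.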
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;Inter-Class Relations&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;601&quot; data-origin-height=&quot;77&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wqe2X/dJMcadBC7uX/1VhIWqRUrXLRonZ3jiqyKK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wqe2X/dJMcadBC7uX/1VhIWqRUrXLRonZ3jiqyKK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wqe2X/dJMcadBC7uX/1VhIWqRUrXLRonZ3jiqyKK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fwqe2X%2FdJMcadBC7uX%2F1VhIWqRUrXLRonZ3jiqyKK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;507&quot; height=&quot;65&quot; data-origin-width=&quot;601&quot; data-origin-height=&quot;77&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Batch-level discrepancies are treated as additional knowledge, extracting richer relation information between classes.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
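Analogously, inter-class relations can be read off the transposed logit matrix, and the teacher's and student's relation tensors can be matched. A hypothetical sketch (the mean-squared matching loss is my assumption, not necessarily the paper's exact objective):

```python
import numpy as np

def inter_class_relations(logits):
    # Relate classes by comparing their score profiles across the batch:
    # transpose to (C, B) and take pairwise differences along the batch axis.
    cols = logits.T                                      # (C, B)
    return np.abs(cols[:, None, :] - cols[None, :, :])   # (C, C, B)

def relation_matching_loss(t_logits, s_logits):
    # Align the student's inter-class relation tensor with the teacher's.
    rt = inter_class_relations(t_logits)
    rs = inter_class_relations(s_logits)
    return float(np.mean((rt - rs) ** 2))

rng = np.random.default_rng(1)
t = rng.normal(size=(8, 5))       # teacher logits: 8 samples, 5 classes
assert relation_matching_loss(t, t) == 0.0
```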
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;Virtual Relations&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;493&quot; data-origin-height=&quot;92&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/89250/dJMcajhAlE6/OEJNdJn0prYpOWlvFyvkB1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/89250/dJMcajhAlE6/OEJNdJn0prYpOWlvFyvkB1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/89250/dJMcajhAlE6/OEJNdJn0prYpOWlvFyvkB1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F89250%2FdJMcajhAlE6%2FOEJNdJn0prYpOWlvFyvkB1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;418&quot; height=&quot;78&quot; data-origin-width=&quot;493&quot; data-origin-height=&quot;92&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A virtual view is generated by applying RandAugment, a semantics-preserving transformation, to the original image.&lt;/li&gt;
&lt;li&gt;The virtual relations built from these views give the student richer guidance signals and act as a strong regularizer throughout training.&lt;/li&gt;
&lt;/ul&gt;
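The virtual-view construction amounts to stacking the original batch with an augmented copy, doubling the nodes of the relation graph. A sketch with a small noise transform standing in for RandAugment (the actual method would use RandAugment):

```python
import numpy as np

def make_virtual_batch(x, augment):
    # Stack the original and augmented ("virtual") views into one batch,
    # doubling the nodes available for relation building.
    return np.concatenate([x, augment(x)], axis=0)   # (2B, ...)

# Stand-in for RandAugment: any semantics-preserving stochastic transform.
rng = np.random.default_rng(2)
noisy = lambda x: x + 0.01 * rng.normal(size=x.shape)

x = rng.normal(size=(4, 3, 8, 8))   # B=4 toy images
vb = make_virtual_batch(x, noisy)
assert vb.shape == (8, 3, 8, 8)     # first B rows: original view
```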
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;Graph Pruning&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Redundant edges: to cut the 4x overhead introduced by the virtual views, half of the symmetric graph is removed, and intra-view edges are pruned as well so that only inter-view relations remain.&lt;/li&gt;
&lt;li&gt;Unreliable edges: reliability is scored by the disagreement between the two predictions; the larger the disagreement, the less trustworthy the relation, and such edges are pruned dynamically to keep spurious predictions from spreading across the batch.&lt;/li&gt;
&lt;/ul&gt;
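The two pruning rules can be expressed as a boolean edge mask. A sketch, assuming a (2B x 2B) per-edge disagreement matrix and a fixed threshold (the paper's reliability measure and threshold schedule may differ):

```python
import numpy as np

def prune_edges(disagreement, B, thresh):
    # disagreement: (2B, 2B) teacher-student gap per edge; the first B
    # rows/cols index the original view, the last B the virtual view.
    n = 2 * B
    keep = np.triu(np.ones((n, n), dtype=bool), k=1)  # drop the symmetric half
    view = np.arange(n) >= B                          # False = original, True = virtual
    keep &= view[None, :] != view[:, None]             # keep only inter-view edges
    keep &= disagreement <= thresh                     # drop unreliable edges dynamically
    return keep

B = 3
rng = np.random.default_rng(3)
d = rng.uniform(size=(2 * B, 2 * B))
mask = prune_edges(d, B, thresh=0.5)
assert not mask.diagonal().any()                           # no self-edges
assert not mask[:B, :B].any() and not mask[B:, B:].any()   # no intra-view edges
```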
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1314&quot; data-origin-height=&quot;559&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nvGI4/dJMcahc2rIa/IA2bvP9KD8Drc8HRkO5b3K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nvGI4/dJMcahc2rIa/IA2bvP9KD8Drc8HRkO5b3K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nvGI4/dJMcahc2rIa/IA2bvP9KD8Drc8HRkO5b3K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnvGI4%2FdJMcahc2rIa%2FIA2bvP9KD8Drc8HRkO5b3K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1314&quot; height=&quot;559&quot; data-origin-width=&quot;1314&quot; data-origin-height=&quot;559&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1341&quot; data-origin-height=&quot;478&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b96qQ1/dJMcahKTABR/MqUOAPbU2JExcrKB1pj2jk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b96qQ1/dJMcahKTABR/MqUOAPbU2JExcrKB1pj2jk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b96qQ1/dJMcahKTABR/MqUOAPbU2JExcrKB1pj2jk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb96qQ1%2FdJMcahKTABR%2FMqUOAPbU2JExcrKB1pj2jk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1341&quot; height=&quot;478&quot; data-origin-width=&quot;1341&quot; data-origin-height=&quot;478&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Transformation operations&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;VRM's gains do not come merely from a specific transformation technique such as RandAugment.&lt;/li&gt;
&lt;li&gt;Even with weak transformations, stochastic operations alone create a gap between the two views, and the resulting teacher-student discrepancy provides the key regularization.&lt;/li&gt;
&lt;li&gt;On CIFAR100 the data is easy and prediction gaps are small, so artificially widening the gap with extra transformations matters; ImageNet is already hard and the teacher-student disagreement is large, so amplifying it further has comparatively little effect.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;This work clearly identifies the overfitting vulnerability and the negative gradient propagation of existing relation-based distillation methods, and addresses them with VRM and the pruning strategies.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Knowledge Distillation</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/64</guid>
      <comments>https://hakk35.tistory.com/64#entry64comment</comments>
      <pubDate>Sat, 18 Apr 2026 17:32:11 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] What Makes a Good Dataset for Knowledge Distillation?</title>
      <link>https://hakk35.tistory.com/54</link>
      <description>&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of&lt;span&gt; &quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/CVPR2025/papers/Frank_What_Makes_a_Good_Dataset_for_Knowledge_Distillation_CVPR_2025_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;What Makes a Good Dataset for Knowledge Distillation?&lt;/a&gt;&quot; presented at CVPR 2025.&lt;/span&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Standard KD assumes the student can be trained on the original dataset the teacher was trained on, but in real applications this is not always possible.&lt;/li&gt;
&lt;li&gt;To get around this, one can consider using 'supplemental data'. What, then, makes a dataset good for transferring knowledge?&lt;/li&gt;
&lt;li&gt;Real, in-domain data may seem like the only option, but this work shows that unnatural synthetic datasets can also serve as an alternative.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;b&gt;&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;KD is usually performed with the dataset the teacher was trained on, but assuming the original data is always accessible is not realistic in practice.&lt;/li&gt;
&lt;li&gt;To overcome this limitation, the following kinds of supplemental data can be considered.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;real in-domain examples&lt;/li&gt;
&lt;li&gt;real out-of-domain examples&lt;/li&gt;
&lt;li&gt;synthetic examples optimized to be ID&amp;nbsp;&lt;/li&gt;
&lt;li&gt;unoptimized unnatural synthetic OOD imagery (e.g. OpenGL shaders)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Real ID data is usually considered a reasonable surrogate, but can knowledge also be transferred through the most unconventional datasets? &amp;rarr; What properties does a dataset need, and what conditions must it satisfy, to serve knowledge distillation?&amp;nbsp;&lt;/li&gt;
&lt;li&gt;Through this paper,&amp;nbsp;
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;the key characteristics of datasets that enable successful distillation are identified,&lt;/li&gt;
&lt;li&gt;successful distillation using unnatural synthetic OOD data is demonstrated,&lt;/li&gt;
&lt;li&gt;and this knowledge transfer is further improved through an adversarial attack strategy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Related Work&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Knowledge Distillation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;[&lt;a href=&quot;https://arxiv.org/pdf/2106.05237&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;1&lt;/a&gt;]:&amp;nbsp;Treats KD from the viewpoint of &lt;i&gt;function matching&lt;/i&gt; and shows that applying strong mixup improves student performance.&lt;/li&gt;
&lt;li&gt;[&lt;a href=&quot;https://ojs.aaai.org/index.php/AAAI/article/view/4263&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;2&lt;/a&gt;, &lt;a href=&quot;https://ieeexplore.ieee.org/abstract/document/10175589?casa_token=4qrdjACGhAAAAAAA:JirMcrFc5Xw3RDUCBAW_hI0-xXR-w2QKuzeclivjlMrhqLHgz3FW7qkL1nE1GxumcQNF-xhu&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;3&lt;/a&gt;]: Show that adversarial examples help performance by probing the teacher's decision boundary.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Utilizing Supplemental Data in Knowledge Distillation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Various works use substitute datasets when the original is unavailable; among them, [&lt;a href=&quot;https://arxiv.org/pdf/2401.06826&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;4&lt;/a&gt;, &lt;a href=&quot;https://openreview.net/pdf?id=fcqWJ8JgMR&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;5&lt;/a&gt;] combine KD with domain adaptation so that the student learns the same classes as the teacher in an entirely different domain (real images &amp;harr; drawings).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Data-Free&amp;nbsp;Knowledge&amp;nbsp;Distillation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;DFKD creates data useful for KD either by 1) employing a generator network or 2) exploiting statistics embedded in the teacher model.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Methodology&lt;/b&gt;&lt;/h2&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Natural Dataset Collection&amp;nbsp;&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Choosing the surrogate that shares the most class information with the original dataset (i.e., ID data) is reasonable, but must it be ID? Can OOD data be used instead?&lt;/li&gt;
&lt;li&gt;To answer this, CIFAR10, CIFAR100, TinyImageNet, ImageNet (split into ID and OOD subsets), FGVC-Aircraft, Pets, Food, and EuroSAT are used.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Synthetic Dataset Collection&amp;nbsp;&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Going beyond ID versus OOD: must the surrogate be real at all? Prior DFKD work considered synthetic datasets optimized to be ID, but is such optimization really necessary?&lt;/li&gt;
&lt;li&gt;To check whether knowledge transfers through unoptimized, unnatural synthetic OOD images, OpenGL shaders, Leaves, and Noise are used.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;871&quot; data-origin-height=&quot;475&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vkXwB/btsQdnkoXvP/9klPzxdUklFWUapKlOdlTk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vkXwB/btsQdnkoXvP/9klPzxdUklFWUapKlOdlTk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vkXwB/btsQdnkoXvP/9klPzxdUklFWUapKlOdlTk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvkXwB%2FbtsQdnkoXvP%2F9klPzxdUklFWUapKlOdlTk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;466&quot; height=&quot;254&quot; data-origin-width=&quot;871&quot; data-origin-height=&quot;475&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Given a teacher model $\mathcal{F}_T$ pretrained on a dataset with classes $\mathcal{C}=\{c_1, \dots, c_R\}$ and an initial synthetic dataset $\mathcal{D}_S$, each sample is passed through the teacher to obtain a prediction, and these predictions are used to build the final synthetic distillation set $\mathcal{D}_K$.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;For images the teacher predicts as class $c_i$, $N_i$ samples are drawn at random.&lt;/li&gt;
&lt;li&gt;If a class receives no predictions at all, it is skipped and more samples are drawn from the other classes.&lt;/li&gt;
&lt;li&gt;If a class receives fewer than $N_i$ predictions, samples are duplicated to fill the quota.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
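The sampling procedure above can be sketched as follows (toy helper names; the redistribution of a skipped class's quota to the remaining classes is omitted for brevity):

```python
import random
from collections import defaultdict

def build_distill_set(samples, teacher_predict, quota):
    # Group synthetic samples by the teacher's predicted class, then draw
    # quota[c] samples per class; classes with zero predictions are
    # skipped, and short classes are filled by duplication.
    by_class = defaultdict(list)
    for s in samples:
        by_class[teacher_predict(s)].append(s)
    chosen = []
    for c, n in quota.items():
        pool = by_class.get(c, [])
        if not pool:          # class never predicted: skip it
            continue
        if len(pool) >= n:
            chosen += random.sample(pool, n)
        else:                 # too few predictions: duplicate to fill the quota
            chosen += [pool[i % len(pool)] for i in range(n)]
    return chosen

random.seed(0)
samples = list(range(20))
predict = lambda s: s % 3          # toy "teacher": only classes 0, 1, 2 predicted
dk = build_distill_set(samples, predict, {0: 2, 1: 5, 2: 4, 3: 3})
assert len(dk) == 11               # class 3 was never predicted, so skipped
```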
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Data Augmentation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Synthetic data can be generated almost without limit, but this requires large storage and does not guarantee that new synthetic samples differ substantially from the ones that already exist.&lt;/li&gt;
&lt;li&gt;Data augmentation therefore offers a way to increase dataset diversity: it creates more samples during distillation and lets the student explore more of the teacher's feature space.&lt;/li&gt;
&lt;li&gt;In ordinary supervised learning, augmentation must be &lt;i&gt;label-preserving&lt;/i&gt;; but once KD is viewed as &lt;i&gt;function matching&lt;/i&gt;, labels no longer matter, so augmentations that supervised learning would never allow become applicable.&lt;/li&gt;
&lt;/ul&gt;
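The function-matching view boils down to matching softened teacher and student outputs on the same input, with no ground-truth label involved. A minimal sketch using the standard temperature-scaled KL objective (the temperature value is illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    # Numerically stable temperature-scaled softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(t_logits, s_logits, T=4.0):
    # Function matching: make the student's softened distribution match the
    # teacher's on the *same* input -- no ground-truth label is involved,
    # so even label-destroying augmentations are admissible.
    p = softmax(t_logits, T)
    q = softmax(s_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

rng = np.random.default_rng(4)
t = rng.normal(size=(2, 10))
assert kd_loss(t, t) < 1e-12       # identical functions -> zero divergence
assert kd_loss(t, rng.normal(size=(2, 10))) > 0.0
```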
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Knowledge Distillation&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Datasets &amp;amp; Networks&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Teachers are trained on general-purpose datasets (e.g., C10, C100, Tiny) and fine-grained/domain-specific datasets (e.g., FGVC-Aircraft, Pets, EuroSAT); distillation uses these six plus ImageNet-ID, ImageNet-OOD, Food, OpenGL shaders, Leaves, and Noise.&lt;/li&gt;
&lt;li&gt;CIFAR10/100-trained teachers: ResNet50&amp;nbsp;&amp;rarr; ResNet18 / WRN-40-2; others: ResNet50 &amp;rarr; ResNet18 / MV2&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Training Details&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Teacher: RandAugment $(n=2, m=14)$, random horizontal flipping, and random cropping with padding are used for data augmentation.&lt;/li&gt;
&lt;li&gt;Distillation: real samples use the same data augmentation as in teacher training; synthetic samples additionally receive stronger augmentation: RandAugment $(n=4, m=14)$, random elastic, and random inversion transforms.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Results&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;614&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/AsydS/btsQfbQUa5D/gqwtj8B6cEA03ageeHuMo0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/AsydS/btsQfbQUa5D/gqwtj8B6cEA03ageeHuMo0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/AsydS/btsQfbQUa5D/gqwtj8B6cEA03ageeHuMo0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAsydS%2FbtsQfbQUa5D%2Fgqwtj8B6cEA03ageeHuMo0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1570&quot; height=&quot;614&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;614&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Standard Knowledge Distillation&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;Does the distillation data need to be in-domain?&lt;/i&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Using the original dataset performs best, but many real ID and OOD surrogate datasets also perform quite well.&lt;/li&gt;
&lt;li&gt;With the Pets-trained teacher, the surrogate (IN-ID, 50K samples) outperforms the original dataset (Pets, 3600 samples), and on FGVC-Aircraft, IN-OOD (50K samples) outperforms IN-ID (3900 samples).&lt;/li&gt;
&lt;li&gt;Training the student longer makes alternative OOD datasets reasonably effective, but ID data can absorb the teacher's knowledge from fewer samples (sample efficiency &amp;uarr;).&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;i&gt; Does the distillation data need to be real?&lt;/i&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Even with unnatural synthetic datasets, knowledge distillation succeeds to a reasonable degree.&lt;/li&gt;
&lt;li&gt;However, once the number of classes grows (TinyImageNet) or the classes become more fine-grained (FGVC-Aircraft, Pets), synthetic datasets no longer perform well.&lt;/li&gt;
&lt;li&gt;Leaves improves over Noise, presumably due to the &lt;i&gt;primitive&lt;/i&gt; features (lines and corners) present in Leaves images; OpenGL shaders improve further still because they contain more diversity and texture than Leaves.&lt;/li&gt;
&lt;li&gt;In short, the dataset need not be real: unnatural synthetic datasets can still transfer knowledge effectively.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;i&gt;How&amp;nbsp;does&amp;nbsp;the&amp;nbsp;teacher&amp;nbsp;architecture&amp;nbsp;influence&amp;nbsp;what&amp;nbsp;distillation&amp;nbsp;datasets&amp;nbsp;are&amp;nbsp;viable?&lt;/i&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;653&quot; data-origin-height=&quot;363&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eVRc7P/btsQcpC7seB/zWGaWtEB3XZcy3Uqjgtvhk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eVRc7P/btsQcpC7seB/zWGaWtEB3XZcy3Uqjgtvhk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eVRc7P/btsQcpC7seB/zWGaWtEB3XZcy3Uqjgtvhk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeVRc7P%2FbtsQcpC7seB%2FzWGaWtEB3XZcy3Uqjgtvhk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;508&quot; height=&quot;282&quot; data-origin-width=&quot;653&quot; data-origin-height=&quot;363&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The teacher's architecture strongly affects distillation speed: a structurally more complex, higher-performing teacher requires &lt;i&gt;patient&lt;/i&gt; distillation [1]. However, the teacher's architecture does not appear to affect which datasets are viable for distillation.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;What Influences Successful Distillation?&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;752&quot; data-origin-height=&quot;357&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/4iqEM/btsQgeTTpiY/1iRthwqUQr0rrjfKQ12Bsk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/4iqEM/btsQgeTTpiY/1iRthwqUQr0rrjfKQ12Bsk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/4iqEM/btsQgeTTpiY/1iRthwqUQr0rrjfKQ12Bsk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F4iqEM%2FbtsQgeTTpiY%2F1iRthwqUQr0rrjfKQ12Bsk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;558&quot; height=&quot;265&quot; data-origin-width=&quot;752&quot; data-origin-height=&quot;357&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;To analyze what makes some datasets outperform others, relative entropy is computed from the teacher's class prediction histogram over each dataset.&lt;/li&gt;
&lt;li&gt;The best-performing datasets have a relative entropy close to 1, meaning the teacher predicts all classes roughly uniformly.&lt;/li&gt;
&lt;/ul&gt;
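The relative entropy here is plausibly the histogram entropy normalized by its maximum, the log of the class count, so that 1 means a perfectly uniform prediction histogram. A sketch under that assumption:

```python
import numpy as np

def relative_entropy(pred_classes, num_classes):
    # Entropy of the teacher's class-prediction histogram, normalized by
    # log(num_classes) so that 1.0 means perfectly uniform predictions
    # and 0.0 means a single class absorbs every prediction.
    counts = np.bincount(pred_classes, minlength=num_classes)
    p = counts / counts.sum()
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(num_classes))

uniform = np.arange(1000) % 10          # every class predicted equally often
skewed = np.zeros(1000, dtype=int)      # one class absorbs all predictions
assert abs(relative_entropy(uniform, 10) - 1.0) < 1e-12
assert relative_entropy(skewed, 10) == 0.0
```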
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;762&quot; data-origin-height=&quot;395&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bqJeJP/btsQdXZ4OV4/lFnVHznAZe2dwzIkwkHA60/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bqJeJP/btsQdXZ4OV4/lFnVHznAZe2dwzIkwkHA60/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bqJeJP/btsQdXZ4OV4/lFnVHznAZe2dwzIkwkHA60/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbqJeJP%2FbtsQdXZ4OV4%2FlFnVHznAZe2dwzIkwkHA60%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;553&quot; height=&quot;287&quot; data-origin-width=&quot;762&quot; data-origin-height=&quot;395&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;With OpenGL shader images, temperature-scaled softmax outputs outperform one-hot targets and label smoothing.&lt;/li&gt;
&lt;li&gt;This shows that when distilling with OOD data such as OpenGL, understanding nearby decision boundaries and inter-class relationships is especially important.&lt;/li&gt;
&lt;li&gt;With mixup, the gap between the long-tail and balanced settings is negligible, i.e., the teacher's predictions need not be perfectly uniform: when the number and quality of raw samples are inadequate, mixup still covers much of the teacher's feature space.&lt;/li&gt;
&lt;/ul&gt;
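Because function matching needs no labels, mixup here reduces to blending inputs; the soft target is simply the teacher's output on the mixed image. A label-free sketch (parameter names are my own):

```python
import numpy as np

def mixup_inputs(x, alpha=1.0, rng=None):
    # Label-free mixup for function-matching KD: blend shuffled pairs of
    # inputs. The distillation target is the teacher's prediction on the
    # mixed input itself, so no label mixing is required.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm]

rng = np.random.default_rng(5)
x = rng.normal(size=(4, 3, 8, 8))
xm = mixup_inputs(x, rng=rng)
assert xm.shape == x.shape
```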
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;759&quot; data-origin-height=&quot;379&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cVrA83/btsQgcaHzdk/a3KU2nbQcNbvTmjvDLNWK0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cVrA83/btsQgcaHzdk/a3KU2nbQcNbvTmjvDLNWK0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cVrA83/btsQgcaHzdk/a3KU2nbQcNbvTmjvDLNWK0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcVrA83%2FbtsQgcaHzdk%2Fa3KU2nbQcNbvTmjvDLNWK0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;563&quot; height=&quot;281&quot; data-origin-width=&quot;759&quot; data-origin-height=&quot;379&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;With excessive data augmentation, CIFAR10 performs on par with the OOD datasets.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;Tables 3-5 show that KD is a &lt;i&gt;task&amp;nbsp;of&amp;nbsp;function&amp;nbsp;matching&amp;nbsp;&lt;/i&gt;and&amp;nbsp;&lt;i&gt;sufficient&amp;nbsp;sampling&amp;nbsp;of&amp;nbsp;the&amp;nbsp;teacher&lt;/i&gt;.&lt;/li&gt;
&lt;li&gt;However, not every dataset samples the teacher equally efficiently: ID data is more sample-efficient than OOD data, and the original data is the most sample-efficient of all.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;761&quot; data-origin-height=&quot;539&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/BdDq3/btsQeTJJWCZ/QZnKmRA509ylrIdokhnfKK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/BdDq3/btsQeTJJWCZ/QZnKmRA509ylrIdokhnfKK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/BdDq3/btsQeTJJWCZ/QZnKmRA509ylrIdokhnfKK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FBdDq3%2FbtsQeTJJWCZ%2FQZnKmRA509ylrIdokhnfKK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;565&quot; height=&quot;400&quot; data-origin-width=&quot;761&quot; data-origin-height=&quot;539&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The OpenGL shader dataset contains samples predicted across all of the teacher's class regions (relative entropy near 1), whereas CIFAR10 covers only a subset of the classes (relative entropy near 0).&lt;/li&gt;
&lt;li&gt;As a result, the OpenGL shader student acquires decision boundaries similar to the MNIST student's and outperforms the CIFAR10 student.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Adding Teacher Exploitation&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;382&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cT7hPo/btsQdmZ84Vo/Qlpv05DjCTaluHXLY1M181/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cT7hPo/btsQdmZ84Vo/Qlpv05DjCTaluHXLY1M181/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cT7hPo/btsQdmZ84Vo/Qlpv05DjCTaluHXLY1M181/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcT7hPo%2FbtsQdmZ84Vo%2FQlpv05DjCTaluHXLY1M181%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;533&quot; height=&quot;266&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;382&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;758&quot; data-origin-height=&quot;424&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/clJvzK/btsQdTqfAMA/NNmN3mkeFPO4dO7P9ZPP9k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/clJvzK/btsQdTqfAMA/NNmN3mkeFPO4dO7P9ZPP9k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/clJvzK/btsQdTqfAMA/NNmN3mkeFPO4dO7P9ZPP9k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FclJvzK%2FbtsQdTqfAMA%2FNNmN3mkeFPO4dO7P9ZPP9k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;560&quot; height=&quot;313&quot; data-origin-width=&quot;758&quot; data-origin-height=&quot;424&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Surrogate data를 사용하여 지식 증류를 할 때, decision boundary information이 KD에 영향을 미침. 즉, 특정 데이터셋이 KD에 적합하지 않다면, adversarial attack으로 샘플에 minor perturbation을 더해 이를 극복할 수 있음.&lt;/li&gt;
&lt;li&gt;Adversarial attack을 통해서 decision boundary aware dataset을 만들 수 있고, 이를 통해 전체적으로 더 높은 성능을 얻을 수 있음.&lt;/li&gt;
&lt;/ul&gt;
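위의 perturbation 아이디어는 다음과 같이 단순화해 볼 수 있음. 아래는 선형 teacher를 가정하고, FGSM의 sign 기법으로 teacher margin을 줄여 decision boundary에 가까운 샘플을 만드는 최소 스케치임 (numpy 기반의 임의 예시이며, `boundary_aware_samples` 등 이름과 설정은 본 논문의 실제 공격 구성이 아니라 모두 가정임).

```python
import numpy as np

# 가정: 로짓이 w·x 인 선형 teacher 를 쓰는 장난감 예시.
# FGSM 의 sign 기법으로 teacher margin |w·x| 를 줄이는 방향으로
# 작은 perturbation 을 더해, 결정 경계에 가까운 샘플을 만드는 과정을 단순화한 것.

def boundary_aware_samples(x, w, eps=0.1):
    """각 샘플을 teacher 의 결정 경계(w·x = 0) 쪽으로 eps 만큼 이동."""
    margin = x @ w                                   # teacher 로짓 (1차원)
    # d|margin|/dx = sign(margin) * w 이므로, 그 sign 의 반대 방향으로 한 걸음
    grad_sign = np.sign(np.outer(np.sign(margin), w))
    return x - eps * grad_sign

w = np.array([1.0, -1.0])
x = np.array([[3.0, 0.0], [0.0, 3.0], [2.0, -1.0]])  # 경계에서 떨어진 샘플들
x_adv = boundary_aware_samples(x, w, eps=0.2)
print(np.abs(x @ w), np.abs(x_adv @ w))              # perturbation 후 margin 감소
```

실제 방법은 딥 네트워크에 대한 반복적 공격을 사용하지만, "경계 쪽으로의 작은 이동"이라는 핵심은 동일함.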
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Comparisons to Other Data Sources&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;762&quot; data-origin-height=&quot;566&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PuTW1/btsQffTjXdb/ZZ1LGrpQBGDxrOSednKuR1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PuTW1/btsQffTjXdb/ZZ1LGrpQBGDxrOSednKuR1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PuTW1/btsQffTjXdb/ZZ1LGrpQBGDxrOSednKuR1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPuTW1%2FbtsQffTjXdb%2FZZ1LGrpQBGDxrOSednKuR1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;561&quot; height=&quot;417&quot; data-origin-width=&quot;762&quot; data-origin-height=&quot;566&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;상당한 computational overhead가 필요한 generator network가 없어도, 충분한 성능을 얻을 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;KD is a &lt;b&gt;sufficient sampling problem&lt;/b&gt; that requires the teacher&amp;rsquo;s outputs and decision spaces to be equally and thoroughly explored.&lt;/li&gt;
&lt;li&gt;It is actually possible to distill many different teacher models using &lt;b&gt;unnatural synthetic imagery&lt;/b&gt; in the form of OpenGL shader images.&lt;/li&gt;
&lt;li&gt;An &lt;b&gt;adversarial perturbation strategy&lt;/b&gt; that improves knowledge transfer was proposed.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Data-Free Knowledge Distillation</category>
      <category>adversarial_attack</category>
      <category>computer_vision</category>
      <category>dataset</category>
      <category>data_free_knowledge_distillation</category>
      <category>knowledge_distillation</category>
      <category>sampling_problem</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/54</guid>
      <comments>https://hakk35.tistory.com/54#entry54comment</comments>
      <pubDate>Sun, 31 Aug 2025 21:34:17 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] ShiftKD: Benchmarking Knowledge Distillation under Distribution Shift</title>
      <link>https://hakk35.tistory.com/52</link>
      <description>&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of &quot;&lt;a href=&quot;https://arxiv.org/pdf/2312.16242&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;ShiftKD: Benchmarking Knowledge Distillation under Distribution Shift&lt;/a&gt;&quot; published on arXiv in 2025.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Real-world에서는 훈련 데이터와 테스트 데이터 간의 분포 차이가 빈번하게 발생함. 따라서, Domain Shift에서 기존 KD 방법들의 신뢰성과 강건성을 확인해야 함.&lt;/li&gt;
&lt;li&gt;두 가지의 일반적인 분포 변화 유형(Diversity shift, Correlation shift)에서 다양한 KD 기법들을 평가하며, 이외에도 데이터 증강, 프루닝, 최적화 알고리즘에 따른 성능 변화를 분석함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;잘 학습된 대형 모델이 주어졌을 때, 분포 이동 상황에서도 성능 저하 없이 더 작고 강건한 구조로 압축하는 것이 필요함. 이를 위해 KD가 주목받고 있으나, 기존 방법들은 독립적이고 동일한 분포(i.i.d.)를 전제로 하고 있음.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;827&quot; data-origin-height=&quot;426&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JnZWU/btsPzHXzAfS/ivyVCXAcd7eRPjfaF42WjK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JnZWU/btsPzHXzAfS/ivyVCXAcd7eRPjfaF42WjK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JnZWU/btsPzHXzAfS/ivyVCXAcd7eRPjfaF42WjK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJnZWU%2FbtsPzHXzAfS%2FivyVCXAcd7eRPjfaF42WjK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;827&quot; height=&quot;426&quot; data-origin-width=&quot;827&quot; data-origin-height=&quot;426&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;훈련 환경에서는 깨끗하고 잘 정렬된 데이터가 주어지지만, 실제 deployment environment에서는 Diversity shift와 Correlation shift가 나타날 수 있음.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;Diversity Shift: 실제 사진&amp;nbsp;&amp;rarr; 만화 스타일 이미지로의 스타일 변화&lt;/li&gt;
&lt;li&gt;Correlation Shift: 레이블-특징 간 연관성 변화&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;기존의 i.i.d. 가정과는 다른 분포 이동 상황에서 다양한 KD 기법을 평가함. 이를 통해, 기본적인 Vanilla KD 방법도 때로는 충분할 수 있다는 것을 보여주며, 분포 이동 하에서 dark knowledge 및 데이터 증강의 효과 급감 현상을 밝힘.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;ShiftKD: Framework to Evaluate Knowledge Distillation under Distribution Shift&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Preliminaries&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Knowledge Distillation (KD)&lt;/b&gt;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; KD under distribution shift (non-i.i.d. case)&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Non-i.i.d. 상황에서는, 유사하지만 서로 다른 $K$개의 훈련 도메인들 $\mathcal{D}_{\text{tr}} = \left\{ \mathcal{D}_e = (X_e, Y_e) \right\}_{e=1}^{K}$이 주어지며, 각 도메인은 서로 다른 데이터 분포 $P^e_{XY}$를 따름.&lt;/li&gt;
&lt;li&gt;분포 이동 상황에서의 KD 목표는 훈련 시 접근할 수 없는 테스트 환경 $\mathcal{D}_\text{te}$에서도 잘 동작할 수 있는 학생모델 $S(X; \theta_s)$를 구축하는 것임.&lt;/li&gt;
&lt;li&gt;선생 모델은 분포 이동이 반영된 데이터셋 $\mathcal{D}_\text{tr}$에 대해서 먼저 학습되고, 학생 모델에게 증류함. 이를 통해, 분포가 변화한 테스트셋 $\mathcal{D}_\text{te}$에 대해서 학생의 강건성을 확인함. &amp;rarr; 선생모델 자체가 강건하지 않더라도, KD를 통해 강건한 학생을 얻기를 원함.&lt;/li&gt;
&lt;/ul&gt;
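위 설정에서 실제로 최적화되는 것은 여러 도메인의 배치를 합쳐 계산하는 증류 손실임. 아래는 vanilla KD 손실(CE + temperature 를 적용한 KL 항)을 numpy 로 단순화한 스케치임 (`kd_loss`, 온도 $T$=4, $\alpha$=0.5 등은 모두 설명용 가정이며 본 벤치마크의 실제 하이퍼파라미터가 아님).

```python
import numpy as np

# 가정: teacher/student 로짓이 주어졌을 때의 vanilla KD 손실 스케치.
# 여러 훈련 도메인 D_1..D_K 의 배치를 이어붙여 같은 손실로 학생을 학습한다는
# 본문 설정을 단순화한 것 (로짓 값은 임의의 예시).

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)        # 수치 안정화
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)   # KL(p_t || p_s)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels])
    return np.mean(alpha * ce + (1 - alpha) * (T ** 2) * kl)  # T^2 로 gradient 스케일 보정

# K 개 도메인에서 온 배치를 이어붙여 하나의 손실로 계산
logits_s = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
logits_t = np.array([[3.0, 0.0, -2.0], [0.0, 2.0, 0.0]])
print(kd_loss(logits_s, logits_t, labels=np.array([0, 1])))
```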
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Framework Setting&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Transferable Knowledge algorithms&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;분포 이동 하에서, 어떤 종류의 지식이 학생이 선생을 잘 따라가도록 도울까?&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Distillation Data Manipulation&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;분포 이동 상황에서 KD의 강건성을 얻기 위해 어떤 데이터 전략을 선택해야 할까?&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Optimization option&lt;/b&gt;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt; Types of Distribution shift&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Benchmarking Details&lt;/b&gt;&lt;/h2&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Knowledge Transfer Algorithms&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1014&quot; data-origin-height=&quot;428&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bKC4wM/btsPAEMvRcY/rL3DRBjuklKBop9kiIZ2hk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bKC4wM/btsPAEMvRcY/rL3DRBjuklKBop9kiIZ2hk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bKC4wM/btsPAEMvRcY/rL3DRBjuklKBop9kiIZ2hk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbKC4wM%2FbtsPAEMvRcY%2FrL3DRBjuklKBop9kiIZ2hk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1014&quot; height=&quot;428&quot; data-origin-width=&quot;1014&quot; data-origin-height=&quot;428&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Data Manipulation Techniques&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;953&quot; data-origin-height=&quot;404&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Sr9F4/btsPAlT8RgQ/hdiXnH2u5HCuw14K61zeC1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Sr9F4/btsPAlT8RgQ/hdiXnH2u5HCuw14K61zeC1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Sr9F4/btsPAlT8RgQ/hdiXnH2u5HCuw14K61zeC1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FSr9F4%2FbtsPAlT8RgQ%2FhdiXnH2u5HCuw14K61zeC1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;953&quot; height=&quot;404&quot; data-origin-width=&quot;953&quot; data-origin-height=&quot;404&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Optimization Options&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;분포 이동 상황에서 KD 성능에 영향을 줄 수 있는 하이퍼파라미터, 사전학습, optimizer, 학생 모델 종류 등을 평가함.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Shifted Datasets&lt;br /&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1040&quot; data-origin-height=&quot;757&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cxWLX1/btsPzTRa1bT/44K42VT7V1yFWCR3rWkGGK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cxWLX1/btsPzTRa1bT/44K42VT7V1yFWCR3rWkGGK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cxWLX1/btsPzTRa1bT/44K42VT7V1yFWCR3rWkGGK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcxWLX1%2FbtsPzTRa1bT%2F44K42VT7V1yFWCR3rWkGGK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1040&quot; height=&quot;757&quot; data-origin-width=&quot;1040&quot; data-origin-height=&quot;757&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Diversity shift와 Correlation shift의 이동 조건에서 KD 성능을 평가하기 위해 아래의 5가지 데이터셋을 선택함.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;Diversity shift: OOD generalization에서 널리 사용되는 PACS, OfficeHome, DomainNet을 사용함.&lt;/li&gt;
&lt;li&gt;Correlation shift: ColorMNIST(색과 숫자간의 인위적 상관관계)와 CelebA-Blond(성별과 금발 여부간의 상관관계)를 사용함.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;이 다섯 가지에 국한되지 않고, 대부분의 OOD 벤치마크 데이터셋을 활용할 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Evaluation Implementation&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Evaluation Metrics&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Average Accuracy: 모든 도메인 환경에서 평균적으로 달성한 정확도&lt;/li&gt;
&lt;li&gt;Worst-Group Accuracy (WGA): 가장 낮은 성능을 보인 환경에서의 정확도; 분포 이동이 심한 환경에 대한 강건성을 판단하는 기준&lt;/li&gt;
&lt;li&gt;Expected Calibration Error (ECE): 모델의 예측 신뢰도와 실제 정확도 간의 차이를 측정하는 calibration 지표; 모델이 얼마나 overconfident 또는 underconfident하는지를 수치화&lt;/li&gt;
&lt;/ul&gt;
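세 지표의 계산 자체는 간단하므로 다음과 같이 스케치할 수 있음 (numpy 기반의 임의 예시이며, bin 수 10 등 세부 설정은 논문과 다를 수 있는 가정임).

```python
import numpy as np

# 가정: 도메인(group)별 정답 여부와 예측 신뢰도가 주어졌을 때
# Average Accuracy, WGA, ECE 를 계산하는 단순화된 스케치.

def average_and_worst_group_acc(correct_per_group):
    accs = [np.mean(c) for c in correct_per_group]
    return float(np.mean(accs)), float(min(accs))   # (평균 정확도, 최악 그룹 정확도)

def expected_calibration_error(conf, correct, n_bins=10):
    """신뢰도를 등간격 bin 으로 나눠 |정확도 - 평균 신뢰도| 의 가중합을 계산."""
    ece, n = 0.0, len(conf)
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)

groups = [np.array([1, 1, 0, 1]), np.array([1, 0, 0, 0])]   # 도메인별 정답 여부
avg, wga = average_and_worst_group_acc(groups)
conf = np.array([0.9, 0.8, 0.6, 0.95])
correct = np.array([1, 1, 0, 1])
print(avg, wga, expected_calibration_error(conf, correct))
```

WGA 가 낮을수록 특정 도메인에서의 실패가 크고, ECE 가 클수록 over/underconfident 하다는 뜻임.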
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Benchmarking Results&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;RQ1: Performance Across Distillation Algorithms&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1016&quot; data-origin-height=&quot;574&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bMelrz/btsPzkhxNtG/fBOWSrc3xw5do9I7NBIkI1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bMelrz/btsPzkhxNtG/fBOWSrc3xw5do9I7NBIkI1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bMelrz/btsPzkhxNtG/fBOWSrc3xw5do9I7NBIkI1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbMelrz%2FbtsPzkhxNtG%2FfBOWSrc3xw5do9I7NBIkI1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1016&quot; height=&quot;574&quot; data-origin-width=&quot;1016&quot; data-origin-height=&quot;574&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;KD를 통해 학습된 학생모델은 일반적인 특징에 초점을 맞추기 때문에 일반화 성능이 향상됨. 이는 학생모델이 선생모델보다 구조적으로 더 단순하게 설계되었기 때문임. 결과적으로, 분포 이동에서도 전반적인 성능향상을 가져옴.&lt;/li&gt;
&lt;li&gt;복잡한 KD 기법들이 항상 Vanilla KD보다 큰 이점을 제공하지는 않음.&lt;/li&gt;
&lt;li&gt;KD의 성능 개선 효과는 architectural compatibility에 매우 민감하기 때문에, 분포 이동 유형에 따라 KD 기법을 동적으로 조정할 필요가 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;975&quot; data-origin-height=&quot;665&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bJFdZW/btsPzKAhI0x/pq9YtVjH2KYd2fn3x5bRk1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bJFdZW/btsPzKAhI0x/pq9YtVjH2KYd2fn3x5bRk1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bJFdZW/btsPzKAhI0x/pq9YtVjH2KYd2fn3x5bRk1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbJFdZW%2FbtsPzKAhI0x%2Fpq9YtVjH2KYd2fn3x5bRk1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;975&quot; height=&quot;665&quot; data-origin-width=&quot;975&quot; data-origin-height=&quot;665&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Low-level knowledge는 분포 이동 상황에서 학생을 오히려 혼란스럽게 함. High-level semantic feature를 포함하는 마지막 layer를 사용하는 것이 가장 좋은 성능을 보임.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;복잡한 KD 기법의 성능 저하 원인은 전달된 특징과 실제 필요한 표현 간의 불일치로 설명됨. 분포 변화 상황에서 선생 모델의 신뢰할 수 없는 저수준 특징에 과도하게 의존하면 학생의 성능 저하로 이어짐. 즉, 모든 계층에서 선생 모델을 무작정 따라서는 안 되며, 도메인에 독립적이고 의미 있는 표현을 선별적으로 정렬해야 함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;283&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dbD98I/btsPzHjjuvu/vLOYs7KNZ95oUWKLRTmKf0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dbD98I/btsPzHjjuvu/vLOYs7KNZ95oUWKLRTmKf0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dbD98I/btsPzHjjuvu/vLOYs7KNZ95oUWKLRTmKf0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdbD98I%2FbtsPzHjjuvu%2FvLOYs7KNZ95oUWKLRTmKf0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1012&quot; height=&quot;283&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;283&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1006&quot; data-origin-height=&quot;509&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bR9EoC/btsPzh57Ocu/QPnNm1FaIaOAHkG1vIkznK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bR9EoC/btsPzh57Ocu/QPnNm1FaIaOAHkG1vIkznK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bR9EoC/btsPzh57Ocu/QPnNm1FaIaOAHkG1vIkznK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbR9EoC%2FbtsPzh57Ocu%2FQPnNm1FaIaOAHkG1vIkznK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1006&quot; height=&quot;509&quot; data-origin-width=&quot;1006&quot; data-origin-height=&quot;509&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;선생 모델이 bias를 가지고 있다면, 기존 KD 기법들은 이러한 편향을 학생에게 그대로 전달하게 되어, 학생모델의 성능 향상을 저해함.&lt;/li&gt;
&lt;li&gt;i.i.d. 환경에서 유용했던 dark knowledge는 분포 이동 환경에서는 오히려 역효과를 낳을 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;RQ2: The Role of Distillation Data&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1019&quot; data-origin-height=&quot;451&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/pgrtZ/btsPzlHEL5d/WAzE69GKuv2MLYpqaZtER0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/pgrtZ/btsPzlHEL5d/WAzE69GKuv2MLYpqaZtER0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/pgrtZ/btsPzlHEL5d/WAzE69GKuv2MLYpqaZtER0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FpgrtZ%2FbtsPzlHEL5d%2FWAzE69GKuv2MLYpqaZtER0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1019&quot; height=&quot;451&quot; data-origin-width=&quot;1019&quot; data-origin-height=&quot;451&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;지식 증류에 사용할 데이터를 신중히 선택하는 것이 중요함. 데이터 조작을 통해 학습 데이터를 유용하게 변형하여, 그 분포가 다양한 환경에 공통적인 분포에 더 가까워지도록 해야 함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;RQ3: Possible Causes on Training Options&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Connecting KD to Information Theory&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;KD는 선생모델로부터 오는 유용한 정보만 골라서 사용해야 효과적임. 분포 이동 환경에서는 선생 모델의 잘못된 정보까지 따라하면 오히려 악영향을 미침. 따라서, KD에서도 정보를 선별할 필요가 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;KD가 분포 이동 상황에서도 강인한 경량 모델을 만드는 데 중요한 역할을 하고 있음.&lt;/li&gt;
&lt;li&gt;기존의 복잡한 KD 기법들은 Vanilla KD에 비해 큰 개선을 보여주지 못했음. 따라서, 새로운 알고리즘을 개발할 필요가 있음.&lt;/li&gt;
&lt;li&gt;분포 이동 상황에서 학생 모델의 강인성을 향상시킬 새로운 데이터 기반 방법을 만드는 것이 유망한 연구 방향임.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Knowledge Distillation</category>
      <category>computer_vision</category>
      <category>domain_shift</category>
      <category>knowledge_distillation</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/52</guid>
      <comments>https://hakk35.tistory.com/52#entry52comment</comments>
      <pubDate>Sat, 26 Jul 2025 09:00:55 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] Dataset Condensation with Distribution Matching (DM)</title>
      <link>https://hakk35.tistory.com/51</link>
      <description>&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of &lt;/i&gt;&lt;/span&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/WACV2023/papers/Zhao_Dataset_Condensation_With_Distribution_Matching_WACV_2023_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Dataset&amp;nbsp;Condensation&amp;nbsp;with&amp;nbsp;Distribution&amp;nbsp;Matching&lt;/a&gt;&quot; presented at WACV 2023.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;DD를 통해 합성된 이미지로 모델을 빠르게 학습할 수 있지만, &lt;b&gt;이미지 생성 과정은 복잡한 bi-level optimization과 second-order derivatives computation 때문에 계산 비용이 매우 큼.&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;본 논문은 many sampled embedding spaces에서 합성 이미지와 원본 이미지의 &lt;b&gt;feature distribution을 일치시키는 방식으로 이미지를 합성하는, 최초의 distribution matching 기반 dataset distillation 방법&lt;/b&gt;을 제안함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;822&quot; data-origin-height=&quot;546&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GIwQo/btsN8vdqAVG/xesjol6J2HG4GszYGKmb8K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GIwQo/btsN8vdqAVG/xesjol6J2HG4GszYGKmb8K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GIwQo/btsN8vdqAVG/xesjol6J2HG4GszYGKmb8K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGIwQo%2FbtsN8vdqAVG%2Fxesjol6J2HG4GszYGKmb8K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;482&quot; height=&quot;320&quot; data-origin-width=&quot;822&quot; data-origin-height=&quot;546&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;기존의 다양한 &lt;b&gt;dataset distillation&lt;/b&gt; 기법들은 일정 수준의 성능을 보이지만, 대부분 여전히 &lt;b&gt;비용이 큰 bi-level optimization&lt;/b&gt; 문제를 내포하고 있음.&lt;/li&gt;
&lt;li&gt;본 논문에서는 &lt;b&gt;bi-level optimization을 수행하지 않고도&lt;/b&gt;, &lt;b&gt;distribution matching&lt;/b&gt;을 통해 합성 데이터가 원본 데이터 분포를 다양한 &lt;b&gt;embedding space&lt;/b&gt; 상에서 정합되도록 최적화하는 방법을 제안함.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;이를 위해 분포 간 거리 측정으로 &lt;b&gt;maximum mean discrepancy (MMD)를 사용&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;다양한 embedding space는 무작위로 초기화된 딥러닝 모델들을 샘플링함으로써 효율적으로 구성&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;이 방법은 &lt;b&gt;클래스별로 학습을 독립적으로 수행할 수 있으므로&lt;/b&gt;, &lt;b&gt;병렬 처리 및 계산 부하 분산&lt;/b&gt;이 가능하다는 장점이 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Methodology&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation Problem&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Dataset distillation은 large-scale training set $\mathcal{T}$을 small synthetic set $\mathcal{S}$로 압축하는 방법으로, 아래의 식과 같이, $\mathcal{T}$와 $\mathcal{S}$에 학습된 모델이 unseen testing data에서 비슷한 성능을 내는 것을 목표로 함.&lt;br /&gt;$$ &lt;br /&gt;\mathbb{E}_{x&amp;nbsp;\sim&amp;nbsp;P_{\mathcal{D}}}&amp;nbsp;\left[&amp;nbsp;\ell\left(&amp;nbsp;\phi_{\theta^T}(x),&amp;nbsp;y&amp;nbsp;\right)&amp;nbsp;\right] &lt;br /&gt;\simeq &lt;br /&gt;\mathbb{E}_{x&amp;nbsp;\sim&amp;nbsp;P_{\mathcal{D}}}&amp;nbsp;\left[&amp;nbsp;\ell\left(&amp;nbsp;\phi_{\theta^S}(x),&amp;nbsp;y&amp;nbsp;\right)&amp;nbsp;\right], &lt;br /&gt;$$&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Existing Solutions&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Learning-to-learn problem 방식은 network parameters $\theta^\mathcal{S}$을 synthetic data $\mathcal{S}$의 함수로 정의하고, 원본데이터셋 $\mathcal{T}$에 대한 training loss $\mathcal{L}^\mathcal{T}$을 최소화하는 $\mathcal{S}$를 구함.&lt;br /&gt;$$ &lt;br /&gt;S^* = \arg\min_\mathcal{S} \mathcal{L}^\mathcal{T}\left(\theta^\mathcal{S}(\mathcal{S})\right)$$ $$&lt;br /&gt;\text{subject to} \quad \theta^\mathcal{S}(\mathcal{S}) = \arg\min_\theta \mathcal{L}^\mathcal{S}(\theta). &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;또 다른 방법으로, 합성 데이터와 실제 데이터에 대해 계산된 gradient를 matching하는 방법이 있음. 이 방법은 파라미터 $\theta$와 합성 데이터 $\mathcal{S}$를 번갈아 최적화하면서 다음의 목표를 최소화함.&lt;br /&gt;$$ &lt;br /&gt;\mathcal{S}^* = \arg\min_\mathcal{S} \mathbb{E}_{\theta_0 \sim P_{\theta_0}} \left[ \sum_{t=0}^{T-1} D\left( \nabla_\theta \mathcal{L}^\mathcal{S}(\theta_t), \nabla_\theta \mathcal{L}^\mathcal{T}(\theta_t) \right) \right] &lt;br /&gt;$$ $$ &lt;br /&gt;\text{subject to} \quad \theta_{t+1} \leftarrow \text{opt-alg}_\theta\left( \mathcal{L}^\mathcal{S}(\theta_t), \varsigma_\theta, \eta_\theta \right), &lt;br /&gt;$$&lt;/li&gt;
&lt;/ul&gt;
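Gradient matching의 거리 항 $D(\nabla_\theta \mathcal{L}^\mathcal{S}, \nabla_\theta \mathcal{L}^\mathcal{T})$는 다음과 같이 스케치해 볼 수 있음. 아래는 로지스틱 회귀(선형 모델)를 가정한 numpy 예시이며, 실제 방법이 사용하는 딥 네트워크의 layer별 gradient 정합을 한 파라미터 벡터로 단순화한 것임 (`grad_match_dist` 등 이름은 설명용 가정).

```python
import numpy as np

# 가정: binary 로지스틱 회귀에 대해, 실제 데이터와 합성 데이터에서 계산한
# gradient 간 제곱 거리 D(∇L^S, ∇L^T) 를 구하는 스케치.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(theta, x, y):
    """binary cross-entropy 손실의 theta 에 대한 gradient (배치 평균)."""
    return x.T @ (sigmoid(x @ theta) - y) / len(y)

def grad_match_dist(theta, real_x, real_y, syn_x, syn_y):
    g_t = logistic_grad(theta, real_x, real_y)    # ∇_theta L^T (실제 데이터)
    g_s = logistic_grad(theta, syn_x, syn_y)      # ∇_theta L^S (합성 데이터)
    return float(np.sum((g_t - g_s) ** 2))

rng = np.random.default_rng(0)
real_x = rng.normal(size=(32, 4))
real_y = (real_x[:, 0] > 0).astype(float)
syn_x, syn_y = real_x[:4], real_y[:4]             # 실제 데이터의 부분집합을 합성 후보로
theta = rng.normal(size=4)
print(grad_match_dist(theta, real_x, real_y, syn_x, syn_y))
```

실제 알고리즘은 이 거리를 합성 데이터 $\mathcal{S}$에 대해 최소화하면서, inner loop에서 $\theta$도 함께 갱신함.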
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Dilemma&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;위의 performance matching과 gradient matching 방법은 고비용의 bi-level optimization 과정을 포함함. 즉, inner loop에서는 모델 $\theta$을 최적화하고, outer loop에서는 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;second-order derivative computation을 포함하는 합성 데이터 $\mathcal{S}$를 최적화해야 함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*모델 파라미터 $\theta$는 합성데이터 $\mathcal{S}$에 의해 영향을 받으므로, $\frac{\partial \mathcal{L}^\mathcal{T}(\theta^*(\mathcal{S}))}{\partial \mathcal{S}} = \frac{\partial \mathcal{L}^\mathcal{T}}{\partial \theta^*} \cdot \frac{\partial \theta^*}{\partial \mathcal{S}}$의 chain rule이 성립함. &lt;/span&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;여기서 $\theta^*$는 합성데이터 $\mathcal{S}$를 통해 정의된 $\mathcal{L}^\mathcal{S}$에 대해 gradient descent를 수행한 결과로, $\theta^* = \theta - \alpha \nabla_\theta \mathcal{L}^\mathcal{S}(\theta)$로 정의됨. &lt;/span&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;따라서, $\frac{\partial \theta^*}{\partial \mathcal{S}} = -\alpha \cdot \frac{\partial}{\partial \mathcal{S}} \nabla_\theta \mathcal{L}^\mathcal{S}(\theta) = -\alpha \cdot \nabla^2_{\theta, \mathcal{S}} \mathcal{L}^\mathcal{S}(\theta)$이므로, second-order derivative가 됨.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation with Distribution Matching&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;훈련 이미지들은 일반적으로 high-dimensional하기 때문에 실제 분포를 추정하고 이를 근사하는 합성 데이터를 생성하는 것은 비용이 많이 들고 부정확&lt;/b&gt;함.&lt;/li&gt;
&lt;li&gt;대신, 본 논문의 방법은 &lt;b&gt;각 학습 이미지 $x\in\mathbb{R}^d$가, parametric function $\psi_\vartheta: \mathbb{R}^d \rightarrow \mathbb{R}^{d'}$를 통해 lower dimensional space로 embedding될 수 있다고 가정&lt;/b&gt;함.&amp;nbsp;
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;즉, 각 embedding function $\psi$는 입력 이미지에 대한 부분적인 해석을 제공하며, 이들의 조합은 전체적인 표현을 제공함.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Maximum mean discrepancy (MMD)를 통해서, 원본데이터와 합성 데이터 간의 분포 차이를 측정&lt;/b&gt;할 수 있음.&lt;br /&gt;$$ &lt;br /&gt;\sup_{\|\psi_{\vartheta}\|_{\mathcal{H}} \leq 1} \left( \mathbb{E}[\psi_{\vartheta}(\mathcal{T})] - \mathbb{E}[\psi_{\vartheta}(\mathcal{S})] \right)&lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;Ground-truth data 분포에 접근할 수 없으므로, 아래의 MMD의 empirical estimate를 사용함.&lt;br /&gt;$$ &lt;br /&gt;\mathbb{E}_{\vartheta&amp;nbsp;\sim&amp;nbsp;P_{\vartheta}}&amp;nbsp;\left\|&amp;nbsp; &lt;br /&gt;\frac{1}{|\mathcal{T}|} \sum_{i=1}^{|\mathcal{T}|} \psi_{\vartheta}(x_i) -&amp;nbsp; &lt;br /&gt;\frac{1}{|\mathcal{S}|} \sum_{j=1}^{|\mathcal{S}|} \psi_{\vartheta}(s_j)&amp;nbsp; &lt;br /&gt;\right\|^2 &lt;br /&gt;$$&lt;br /&gt;
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;$P_\vartheta$는 네트워크 파라미터의 분포임.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;이전 연구에서 적용한, 미분가능한 Siamese augmentation $\mathcal{A}(\cdot, \omega)$를 실제 데이터와 합성데이터에 모두 활용하여 최종적인 optimization 문제로 정의하면 다음과 같음.&lt;br /&gt;$$ &lt;br /&gt;\min_\mathcal{S} \mathbb{E}_{\vartheta \sim P_{\vartheta}, \omega \sim \Omega}&amp;nbsp; &lt;br /&gt;\left\| \frac{1}{|\mathcal{T}|} \sum_{i=1}^{|\mathcal{T}|} \psi_{\vartheta}(\mathcal{A}(x_i, \omega)) - \frac{1}{|\mathcal{S}|} \sum_{j=1}^{|\mathcal{S}|} \psi_{\vartheta}(\mathcal{A}(s_j, \omega)) \right\|^2 &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;이를 통해, 다양한 embedding space (다양한 $\vartheta$)에서 두 분포 차이를 최소화하여 합성 데이터 $\mathcal{S}$를 학습함. 위의 식은, &lt;b&gt;모델 파라미터를 전혀 학습할 필요 없이 오직 $\mathcal{S}$만을 최적화하므로, bi-level optimization을 피할 수 있음.&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;본 논문은 이미지 분류 문제를 대상으로 하기 때문에, 같은 클래스 내에서 분포 차이를 최소화함. 또한, 모든 실제 학습 샘플은 레이블을 갖고 있으며, 합성 샘플에도 고정된 레이블을 부여함.&lt;/li&gt;
&lt;/ul&gt;
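&lt;p data-ke-size=&quot;size16&quot;&gt;위의 distribution matching 목적함수는 다음과 같이 스케치할 수 있음 (무작위 선형+ReLU embedding과 가상의 toy 데이터를 가정한 단순화된 numpy 예시이며, 논문의 실제 구현은 ConvNet embedding과 Siamese augmentation을 사용함).&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def random_embed(X, W):
    # 무작위로 초기화된 embedding 함수 ψ_ϑ의 toy 버전 (선형 + ReLU; 논문은 ConvNet 사용)
    return np.maximum(X @ W, 0.0)

def dm_loss(real, syn, W):
    # 한 클래스 내에서, 실제/합성 batch의 평균 embedding 차이의 제곱 노름 (MMD의 empirical estimate)
    diff = random_embed(real, W).mean(axis=0) - random_embed(syn, W).mean(axis=0)
    return float(np.sum(diff ** 2))

d, d_out = 32, 16
real = rng.normal(size=(128, d))  # 한 클래스의 실제 샘플 (가상의 데이터)
syn = rng.normal(size=(10, d))    # 같은 클래스의 합성 샘플 (실제로는 이 값을 최적화함)

# 여러 개의 무작위 ϑ에 대한 기댓값을 Monte Carlo로 근사 (모델 파라미터 학습은 전혀 필요 없음)
loss = np.mean([dm_loss(real, syn, rng.normal(size=(d, d_out))) for _ in range(5)])
```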
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Training Algorithm&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1370&quot; data-origin-height=&quot;420&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d44XB5/btsN8Z6HRf5/XosFNUfMT00k5EPZE8sJO0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d44XB5/btsN8Z6HRf5/XosFNUfMT00k5EPZE8sJO0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d44XB5/btsN8Z6HRf5/XosFNUfMT00k5EPZE8sJO0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd44XB5%2FbtsN8Z6HRf5%2FXosFNUfMT00k5EPZE8sJO0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1370&quot; height=&quot;420&quot; data-origin-width=&quot;1370&quot; data-origin-height=&quot;420&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Discussion&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Randomly&amp;nbsp;Initialized&amp;nbsp;Networks&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Embedding 함수 $\psi_\vartheta$의 집합은 다양한 방식으로 설계될 수 있음.&lt;/b&gt; 본 논문에서는 사전 학습된 네트워크(많은 계산 비용이 필요)에서 파라미터를 샘플링하는 대신, &lt;b&gt;무작위로 초기화된 딥러닝 모델을 여러 개 사용하는 방법을 선택&lt;/b&gt;함.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;무작위로 초기화된 네트워크는 강력한 representation을 만들어 내며, 데이터의 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;distance-preserving embedding을 수행함.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*같은 클래스의 샘플들은 가까이, 다른 클래스의 샘플들은 멀리 위치하도록 embedding&lt;/span&gt;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Connection&amp;nbsp;to&amp;nbsp;Gradient&amp;nbsp;Matching&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Distribution matching은 실제 이미지와 합성 이미지 batch의 평균 feature를 일치시키는 반면, gradient matching은 두 batch에서 계산된 평균 gradient를 일치시킴.&lt;/li&gt;
&lt;li&gt;Distribution matching은 모든 feature에 균등한 가중치를 주는 반면, gradient matching은 예측이 부정확한 샘플에 더 큰 가중치를 부여함.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Generative&amp;nbsp;Models&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;이미지 생성 기법은 실제처럼 보이는 이미지 생성을 목표로 하지만, dataset distillation은 데이터 효율적인 학습 샘플 생성을 목표로 함. 이미지를 현실적으로 보이도록 하는 제약은 데이터 효율성을 제한할 수 있음.&lt;/li&gt;
&lt;li&gt;기존 연구는 cGAN으로 생성된 이미지들이, 무작위로 선택한 실제 이미지보다 모델 학습에 더 좋지 않다는 것을 보여줌.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Comparison to the SOTA&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Competitors&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Coreset selection 중, Herding은 선택된 샘플들의 mean vector가 전체 데이터셋의 mean에 가까워지도록 샘플을 greedily 추가하는 방식&lt;/li&gt;
&lt;li&gt;Forgetting은 네트워크 학습 중 샘플이 얼마나 자주 학습되고 잊히는지를 계산하여, less forgetful 샘플을 제외하는 방식&lt;/li&gt;
&lt;/ul&gt;
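&lt;p data-ke-size=&quot;size16&quot;&gt;Herding의 greedy 선택 과정은 다음과 같이 스케치할 수 있음 (가상의 feature 행렬을 가정한 단순화된 numpy 예시).&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))  # 전체 데이터셋의 feature (가상의 데이터)
target = X.mean(axis=0)         # coreset의 mean vector가 근사해야 할 전체 평균

selected = []
for _ in range(10):             # coreset 크기 10 (가상의 설정)
    best_i, best_d = None, np.inf
    for i in range(len(X)):
        if i in selected:
            continue
        # i를 추가했을 때, 선택된 샘플들의 mean vector와 전체 mean 사이의 거리
        d = np.sum((X[selected + [i]].mean(axis=0) - target) ** 2)
        if d < best_d:
            best_i, best_d = i, d
    selected.append(best_i)     # 거리를 가장 많이 줄이는 샘플을 greedily 추가
```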
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Performance Comparison&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1223&quot; data-origin-height=&quot;444&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blDXWL/btsN8I5lshn/nhZKyxs2wJojVzMRImMHzk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blDXWL/btsN8I5lshn/nhZKyxs2wJojVzMRImMHzk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blDXWL/btsN8I5lshn/nhZKyxs2wJojVzMRImMHzk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FblDXWL%2FbtsN8I5lshn%2FnhZKyxs2wJojVzMRImMHzk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1223&quot; height=&quot;444&quot; data-origin-width=&quot;1223&quot; data-origin-height=&quot;444&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Visualization&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;599&quot; data-origin-height=&quot;368&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bfiHY9/btsOafADoXc/D89A6DqU3pLslxbOt5eri0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bfiHY9/btsOafADoXc/D89A6DqU3pLslxbOt5eri0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bfiHY9/btsOafADoXc/D89A6DqU3pLslxbOt5eri0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbfiHY9%2FbtsOafADoXc%2FD89A6DqU3pLslxbOt5eri0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;418&quot; height=&quot;257&quot; data-origin-width=&quot;599&quot; data-origin-height=&quot;368&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1222&quot; data-origin-height=&quot;313&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/EIDM5/btsN9wXm0Z9/ue9OHnkbi8H1wgBZdDcQMK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/EIDM5/btsN9wXm0Z9/ue9OHnkbi8H1wgBZdDcQMK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/EIDM5/btsN9wXm0Z9/ue9OHnkbi8H1wgBZdDcQMK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FEIDM5%2FbtsN9wXm0Z9%2Fue9OHnkbi8H1wgBZdDcQMK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1222&quot; height=&quot;313&quot; data-origin-width=&quot;1222&quot; data-origin-height=&quot;313&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;각 방법들 (DC, DSA, DM)에 의해 학습된 이미지의 feature distribution을 추출하기 위해, 원본 학습 데이터로 학습된 네트워크를 활용했음.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;DC와 DSA에 의한 합성 이미지는 실제 이미지 분포를 커버하지 못하지만, &lt;b&gt;DM에 의한 합성 이미지는 실제 이미지 분포를 잘 커버하고 있으며, outlier도 더 적음.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Learning with Batch Normalization&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;545&quot; data-origin-height=&quot;165&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bHqEve/btsN9HYvNTR/baoURo6LHzxN6fcCYTRsO0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bHqEve/btsN9HYvNTR/baoURo6LHzxN6fcCYTRsO0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bHqEve/btsN9HYvNTR/baoURo6LHzxN6fcCYTRsO0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbHqEve%2FbtsN9HYvNTR%2FbaoURo6LHzxN6fcCYTRsO0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;453&quot; height=&quot;137&quot; data-origin-width=&quot;545&quot; data-origin-height=&quot;165&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;DSA&lt;/b&gt;에서는 작은 합성 데이터 세트의 경우, BN을 사용할 때 &lt;b&gt;정확한 평균과 표준편차 추정이 어렵고&lt;/b&gt;, 이를 &lt;b&gt;실제 데이터로 사전 설정하여 고정&lt;/b&gt;하면 오히려 &lt;b&gt;최적화가 불안정해지므로&lt;/b&gt;, IN이 더 좋은 성능을 보임.&lt;/li&gt;
&lt;li&gt;반면, &lt;b&gt;DM&lt;/b&gt;은 &lt;b&gt;모든 클래스에서 증강된 합성 데이터를 활용&lt;/b&gt;하여 &lt;b&gt;합성데이터의 실제 평균과 분산을 직접 추정&lt;/b&gt;할 수 있으므로, BN을 안정적으로 사용할 수 있고 &lt;b&gt;성능도 향상됨.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Training Cost Comparison&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;595&quot; data-origin-height=&quot;263&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/B1w3H/btsOad3TX1X/UMucZlcogk6nqGXvnUELd1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/B1w3H/btsOad3TX1X/UMucZlcogk6nqGXvnUELd1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/B1w3H/btsOad3TX1X/UMucZlcogk6nqGXvnUELd1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FB1w3H%2FbtsOad3TX1X%2FUMucZlcogk6nqGXvnUELd1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;475&quot; height=&quot;210&quot; data-origin-width=&quot;595&quot; data-origin-height=&quot;263&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;DM은 bi-level optimization 방법인 DSA보다 훨씬 효율적&lt;/b&gt;임.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Learning Larger Synthetic Sets&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;390&quot; data-origin-height=&quot;262&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Tm1gp/btsN8sBM1lv/8jTx6YlZw0L6WmWPQ6RYv1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Tm1gp/btsN8sBM1lv/8jTx6YlZw0L6WmWPQ6RYv1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Tm1gp/btsN8sBM1lv/8jTx6YlZw0L6WmWPQ6RYv1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FTm1gp%2FbtsN8sBM1lv%2F8jTx6YlZw0L6WmWPQ6RYv1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;317&quot; height=&quot;213&quot; data-origin-width=&quot;390&quot; data-origin-height=&quot;262&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;DSA 같은 bi-level optimization 기반의 방법은 데이터셋이 커질수록 학습시간과 튜닝 비용이 매우 커지지만, &lt;b&gt;DM은 더 큰 합성 데이터셋에서도 효과적으로 학습&lt;/b&gt;할 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Cross-architecture&amp;nbsp;Generalization&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;593&quot; data-origin-height=&quot;237&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b2D6UU/btsOahFegLh/LabixwaAzwFU53eNpo6kW0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b2D6UU/btsOahFegLh/LabixwaAzwFU53eNpo6kW0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b2D6UU/btsOahFegLh/LabixwaAzwFU53eNpo6kW0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb2D6UU%2FbtsOahFegLh%2FLabixwaAzwFU53eNpo6kW0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;503&quot; height=&quot;201&quot; data-origin-width=&quot;593&quot; data-origin-height=&quot;237&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Distribution matching으로 학습된 합성 이미지&lt;/b&gt;는 gradient matching으로 학습된 합성 이미지보다 보지 못한 구조에 대해 &lt;b&gt;더 나은 일반화 성능&lt;/b&gt;을 보임.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;ResNet&lt;/b&gt;과 같은 &lt;b&gt;복잡한 아키텍처로 합성 데이터를 학습&lt;/b&gt;할 경우, 해당 합성 데이터가 그 아키텍처에 과도하게 fitting되어 다른 아키텍처에는 존재하지 않는 bias를 포함하게 되고, 이로 인해 &lt;b&gt;타 아키텍처에서 성능이 하락&lt;/b&gt;함 (마지막 row).&lt;/li&gt;
&lt;li&gt;또한, &lt;b&gt;같은 합성 데이터를 더 복잡한 아키텍처에서 평가&lt;/b&gt;할 때도 성능이 더 낮게 나타나는데 (마지막 column), 이는 &lt;b&gt;작은 합성 데이터만으로는 복잡한 모델이 충분히 학습되지 못해 underfitting&lt;/b&gt;이 발생하기 때문임.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;본 논문은 &lt;b&gt;distribution matching에 기반한 최초의 dataset distillation 방법&lt;/b&gt;을 제안함. 이 방법은 &lt;b&gt;bi-level optimization이 필요 없어 매우 효율적이며&lt;/b&gt;, &lt;b&gt;대규모 또는 복잡한 데이터셋에도 적용 가능&lt;/b&gt;하고, &lt;b&gt;클래스당 수백~수천 장 규모의 합성 데이터셋도 학습할 수 있음.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Dataset Distillation</category>
      <category>computer_vision</category>
      <category>dataset_distillation</category>
      <category>distribution_matching</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/51</guid>
      <comments>https://hakk35.tistory.com/51#entry51comment</comments>
      <pubDate>Fri, 23 May 2025 11:15:19 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] Dataset Distillation by Matching Training Trajectories (MTT)</title>
      <link>https://hakk35.tistory.com/50</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of &lt;/i&gt;&lt;/span&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://openaccess.thecvf.com/content/CVPR2022W/VDU/papers/Cazenavette_Dataset_Distillation_by_Matching_Training_Trajectories_CVPRW_2022_paper.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Dataset&amp;nbsp;Distillation&amp;nbsp;by&amp;nbsp;Matching&amp;nbsp;Training&amp;nbsp;Trajectories&lt;/a&gt;&quot; presented at CVPR 2022.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;합성데이터를 학습할 때, &lt;b&gt;모델의 파라미터가 실제 데이터로 학습했을 때의 파라미터 궤적과 유사한 경로&lt;/b&gt;를 따르도록 설계함.&lt;/li&gt;
&lt;li&gt;이를 위해, &lt;b&gt;실제 데이터로 사전 학습된 전문가 네트워크의 학습 궤적(trajectory)을 미리 계산하고 저장&lt;/b&gt;함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;793&quot; data-origin-height=&quot;940&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dWvrX1/btsN5LNdv4M/42IszgKhCdrhl6Koyw9Ks0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dWvrX1/btsN5LNdv4M/42IszgKhCdrhl6Koyw9Ks0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dWvrX1/btsN5LNdv4M/42IszgKhCdrhl6Koyw9Ks0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdWvrX1%2FbtsN5LNdv4M%2F42IszgKhCdrhl6Koyw9Ks0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;565&quot; height=&quot;670&quot; data-origin-width=&quot;793&quot; data-origin-height=&quot;940&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;기존 연구는 주로 낮은 해상도의 데이터셋 (e.g., MNIST, CIFAR)에만 국한&lt;/b&gt;되고, 다음의 한계가 존재함.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;여러 반복을 &lt;b&gt;unroll하는 과정에서 학습 불안정성 발생&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;막대한 연산 및 메모리 자원이 요구&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;실제 데이터의 한 학습 스텝을 합성 데이터의 한 스텝으로 맞추는 방식을 사용하여, &lt;b&gt;평가 시 여러 스텝을 적용하면 오차가 누적&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;본 연구는 &lt;b&gt;합성 데이터로 훈련된 파라미터 변화 궤적의 일부 구간을, 실제 데이터로 훈련된 전문가 궤적의 동일 구간과 일치시키도록 설계&lt;/b&gt;함. 이를 통해, 단기적인 스텝 매칭이나 전체 궤적 모델링과 같은 어려운 최적화 문제를 피할 수 있음.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1638&quot; data-origin-height=&quot;747&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oOFuq/btsN33WazMn/r8LM7Dnvi3xSjUyZTn3l4k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oOFuq/btsN33WazMn/r8LM7Dnvi3xSjUyZTn3l4k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oOFuq/btsN33WazMn/r8LM7Dnvi3xSjUyZTn3l4k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoOFuq%2FbtsN33WazMn%2Fr8LM7Dnvi3xSjUyZTn3l4k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1638&quot; height=&quot;747&quot; data-origin-width=&quot;1638&quot; data-origin-height=&quot;747&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;br /&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;실제 데이터로 여러 개의 모델을 학습하고 전문가 궤적을 저장&lt;/li&gt;
&lt;li&gt;무작위로 선택한 전문가 궤적의 무작위 시점 파라미터로 모델을 초기화&lt;/li&gt;
&lt;li&gt;해당 모델을 합성 데이터로 여러 번 학습시킨 뒤, 전문가 궤적의 파라미터와 얼마나 일치하는지를 손실로 계산하고, 역전파를 통해 합성데이터를 업데이트&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;해당 방법은 표준 데이터셋 (e.g., CIFAR-100, TinyImagenet)뿐만 아니라, &lt;b&gt;고해상도 데이터셋 (e.g., ImageNet)에도 적용 가능한 최초의 방법&lt;/b&gt;임.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Method&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Expert Trajectories&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;합성 데이터로 훈련된 파라미터 $\hat{\theta}_t$ 궤적이 실제 데이터로 유도된 궤적 (i.e., &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;전문가 궤적 $\tau^*$)과 유사하도록 합성데이터를 만듦.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;전문가 궤적은 실제 데이터셋으로 여러 개의 네트워크를 학습시키고, 각 epoch 마다 파라미터를 저장하여 얻을 수 있으므로, 증류 전에 미리 계산해둘 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*원본 데이터셋을 사용해 네트워크를 학습할 때 생성되는 파라미터의 시간적 순서 $\{\theta_t^*\}_{0}^{T}$를 의미&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Long-Range Parameter Matching&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;801&quot; data-origin-height=&quot;902&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/rwwnR/btsN5eWPDVO/jeGiQ4pJeQ7Lzy5rZq4jVk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/rwwnR/btsN5eWPDVO/jeGiQ4pJeQ7Lzy5rZq4jVk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/rwwnR/btsN5eWPDVO/jeGiQ4pJeQ7Lzy5rZq4jVk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrwwnR%2FbtsN5eWPDVO%2FjeGiQ4pJeQ7Lzy5rZq4jVk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;610&quot; height=&quot;687&quot; data-origin-width=&quot;801&quot; data-origin-height=&quot;902&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;각 증류 단계에서, &lt;b&gt;전문가 궤적의 임의 시점 파라미터 $\theta^*_t$를 샘플링하여 학생 파라미터를 초기화함 $\hat{\theta}_t = \theta_t^*$&lt;/b&gt;. 이때, 후반부 궤적은 파라미터 변화가 작아 유익한 신호가 적기 때문에, 최대 시점 $T^+$를 설정해 해당 시점 이후는 제외함.&lt;/li&gt;
&lt;li&gt;합성 데이터 $\mathcal{D}_{\text{syn}}$를 활용해, &lt;b&gt;초기화된 학생 네트워크를 $N$번 gradient descent 업데이트&lt;/b&gt; 함.&lt;br /&gt;$$ &lt;br /&gt;\hat{\theta}_{t+n+1}&amp;nbsp;=&amp;nbsp;\hat{\theta}_{t+n}&amp;nbsp;-&amp;nbsp;\alpha&amp;nbsp;\nabla&amp;nbsp;\ell(\mathcal{A}(\mathcal{D}_{\text{syn}});&amp;nbsp;\hat{\theta}_{t+n}), &lt;br /&gt;$$ &lt;br /&gt;
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;여기서, $\mathcal{A}$는 이전 연구에서 사용된 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;미분 가능한 augmentation 기법이고, $\alpha$는 학습가능한 learning rate임.&lt;/li&gt;
&lt;li&gt;역전파를 통해 합성 데이터에 손실을 전달해야 하므로 $\mathcal{A}$는 반드시 미분 가능해야 함.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;이후, &lt;b&gt;전문가 궤적에서 $t$ 시점으로부터 $M$ 스텝 이후의 파라미터 $\theta^*_{t+M}$를 가져와 학생 네트워크의 업데이트된 파라미터 $\hat{\theta}_{t+N}$와 비교&lt;/b&gt;함. 이때, weight matching loss는 다음과 같이, normalized squared $L_2$임.&lt;br /&gt;$$ &lt;br /&gt;\mathcal{L}&amp;nbsp;=&amp;nbsp;\frac{\left\|&amp;nbsp;\hat{\theta}_{t+N}&amp;nbsp;-&amp;nbsp;\theta_{t+M}^*&amp;nbsp;\right\|_2^2}{\left\|&amp;nbsp;\theta_t^*&amp;nbsp;-&amp;nbsp;\theta_{t+M}^*&amp;nbsp;\right\|_2^2} &lt;br /&gt;$$
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;Expert distance $ \theta_t^* - \theta_{t+M}^* $로 정규화함으로써, &lt;span style=&quot;color: #006dd7;&quot;&gt;*&lt;/span&gt;궤적 후반부처럼 변화량이 적은 구간에서도 강한 신호를 얻을 수 있음.&lt;/li&gt;
&lt;li&gt;또한, 이 정규화는 neurons간 또는 layers간의 크기 차이도 &lt;span style=&quot;color: #009a87;&quot;&gt;*&lt;/span&gt;self-calibration하는 효과가 있음.&lt;/li&gt;
&lt;li&gt;Cosine distance나 logit matching도 실험적으로 시도되었지만, $L_2$ 손실이 안정적이고 성능이 좋았음.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;최종적으로, 이 손실 $\mathcal{L}$을 $N$개의 &lt;b&gt;업데이트 과정 전체를 따라 역전파하여, 합성 이미지의 픽셀과 learning rate $\alpha$를 동시에 최적화&lt;/b&gt;함.&lt;/li&gt;
&lt;li&gt;이때, 학습 가능한 $\alpha$를 최적화하는 것은, 학생과 전문가의 update 횟수 $(N, M)$를 고정해두고도, 학생의 학습 궤적이 전문가 궤적을 효과적으로 따라가도록 update 크기를 자동으로 조절하는 역할을 함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*증류 과정에서는 실제 데이터가 전혀 사용되지 않고, 합성 데이터에만 증강을 적용하므로 &lt;i&gt;Siamese &lt;/i&gt;augmentation은 필요 없음. &lt;br /&gt;하지만, 전문가 궤적을 생성할 때 적용한 증강 기법과 일치시켜야 함.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;*전문가 궤적의 변화가 거의 없으면, 즉 $\theta_t^* - \theta_{t+M}^*$가 매우 작으면, 학생 파라미터와 전문가 파라미터 간의 차이(분자)가 작더라도 궤적 변화량 대비 상대 오차로 계산되기 때문에 역전파 신호가 강해짐.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #009a87;&quot;&gt;*각 레이어나 뉴런마다 파라미터 크기가 다르기 때문에 단순 $L_2$ 손실을 적용하면, 크기가 큰 레이어에 학습이 편향되게 됨. 전문가가 이동한 전체 거리 $ \theta_t^* - \theta_{t+M}^* $는 파라미터 전체의 누적 변화량을 나타내므로, 이를 활용해 정규화를 하면, 큰 파라미터에 과도하게 편향되지 않음.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
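&lt;p data-ke-size=&quot;size16&quot;&gt;위의 weight matching loss는 다음과 같이 스케치할 수 있음 (가상의 전문가 궤적과 quadratic 형태의 합성데이터 손실을 가정한 toy numpy 예시이며, 실제로는 이 손실을 $N$번의 업데이트 전체로 역전파하여 합성 이미지와 $\alpha$를 갱신함).&lt;/p&gt;

```python
import numpy as np

def weight_matching_loss(theta_hat, theta_target, theta_start):
    # || θ̂_{t+N} - θ*_{t+M} ||² 를, 전문가가 이동한 거리 || θ*_t - θ*_{t+M} ||² 로 정규화
    num = np.sum((theta_hat - theta_target) ** 2)
    den = np.sum((theta_start - theta_target) ** 2)
    return float(num / den)

rng = np.random.default_rng(0)
theta_start = rng.normal(size=8)     # θ*_t : 전문가 궤적에서 샘플링한 초기화 지점
theta_target = theta_start - 0.5     # θ*_{t+M} : M 스텝 후의 전문가 파라미터 (가상의 궤적)

# 학생을 합성 데이터로 N = 3번 gradient descent 업데이트
theta_hat, alpha = theta_start.copy(), 0.1
for _ in range(3):
    grad = theta_hat - theta_target  # 가상의 합성데이터 손실의 gradient
    theta_hat = theta_hat - alpha * grad

# 이 toy 설정에서는 각 스텝이 오차를 (1 - α)배로 줄이므로 loss = (1 - α)^{2N}
loss = weight_matching_loss(theta_hat, theta_target, theta_start)
```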
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Memory Constraints&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;각 최적화 단계마다 모든 클래스의 모든 이미지를 동시에 최적화&lt;/b&gt;해야 하므로, 합성 데이터셋의 크기가 커질수록 &lt;b&gt;메모리 소비가 심각한 문제&lt;/b&gt;가 됨.&lt;/li&gt;
&lt;li&gt;이전 방법들은 한 번에 하나의 클래스만 증류하여 메모리 사용을 줄였지만, &lt;b&gt;trajectory matching에서는 전문가 궤적이 다중 클래스를 동시에 학습한 모델에서 생성&lt;/b&gt;되므로, &lt;b&gt;클래스별 증류 전략이 적절하지 않음.&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;각 distillation step마다 새로운 mini-batch를 샘플링&lt;/b&gt;하여 (outer loop, Algorithm 1 Line 3) 최적화하면 &lt;b&gt;메모리 부담을 줄일 수는 있으나, 중복된 정보가 여러 합성 이미지에 증류되어, 합성 이미지들이 유사해지는 catastrophic mode collapse가 발생&lt;/b&gt;할 수 있음.&lt;/li&gt;
&lt;li&gt;대신, &lt;b&gt;학생 네트워크의 각 업데이트마다&lt;/b&gt; (inner loop, Algorithm 1 Line 10) &lt;b&gt;새로운 mini-batch $b$를 샘플링&lt;/b&gt;함. 이렇게 하면 &lt;b&gt;최종 weight matching loss를 계산할 시점에는, 모든 합성 이미지가 한 번씩 학습에 사용되었을 것이 보장&lt;/b&gt;됨.&lt;/li&gt;
&lt;/ul&gt;
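&lt;p data-ke-size=&quot;size16&quot;&gt;위의 inner loop 샘플링 전략이 보장하는 "모든 합성 이미지가 한 번씩 사용됨"이라는 성질은, 합성 데이터셋을 셔플한 뒤 $N$개의 mini-batch로 분할하는 방식으로 스케치할 수 있음 (가상의 크기 설정을 가정한 단순화된 예시).&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
num_syn, b = 50, 10   # 합성 이미지 수와 mini-batch 크기 (가상의 설정)

# 셔플된 인덱스를 N = num_syn // b 개의 batch로 분할하면,
# weight matching loss를 계산할 시점에는 모든 합성 이미지가 한 번씩 사용됨
batches = rng.permutation(num_syn).reshape(num_syn // b, b)

used = []
for batch in batches:            # inner loop: 학생의 각 업데이트마다 새 mini-batch
    used.extend(batch.tolist())  # (여기서 batch로 학생 파라미터를 한 번 업데이트)
```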
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Low-Resolution Data&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1797&quot; data-origin-height=&quot;506&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/csyV5o/btsN7ZZ32RW/PDD5XnhlUl0kmrKkma37t1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/csyV5o/btsN7ZZ32RW/PDD5XnhlUl0kmrKkma37t1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/csyV5o/btsN7ZZ32RW/PDD5XnhlUl0kmrKkma37t1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcsyV5o%2FbtsN7ZZ32RW%2FPDD5XnhlUl0kmrKkma37t1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1797&quot; height=&quot;506&quot; data-origin-width=&quot;1797&quot; data-origin-height=&quot;506&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;878&quot; data-origin-height=&quot;1115&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/7gDia/btsN78WMgU4/5wWhIhGczuK5CIF32tHHpK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/7gDia/btsN78WMgU4/5wWhIhGczuK5CIF32tHHpK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/7gDia/btsN78WMgU4/5wWhIhGczuK5CIF32tHHpK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F7gDia%2FbtsN78WMgU4%2F5wWhIhGczuK5CIF32tHHpK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;471&quot; height=&quot;598&quot; data-origin-width=&quot;878&quot; data-origin-height=&quot;1115&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;클래스당 합성 이미지를 1장으로 제한&lt;/b&gt;하면, 클래스를 구별할 수 있는 &lt;b&gt;모든 정보를 단 1장의 샘플에 압축&lt;/b&gt;시켜야 함. 반면, &lt;b&gt;더 많은 이미지를 허용&lt;/b&gt;하면, 클래스를 구별하는 특징들을 &lt;b&gt;여러 이미지에 나누어 분산&lt;/b&gt;시킬 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Cross-Architecture Generalization&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;335&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b7QpF4/btsN7usVrRH/r4zvgL8ovcrXM9SKhnTLVK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b7QpF4/btsN7usVrRH/r4zvgL8ovcrXM9SKhnTLVK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b7QpF4/btsN7usVrRH/r4zvgL8ovcrXM9SKhnTLVK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb7QpF4%2FbtsN7usVrRH%2Fr4zvgL8ovcrXM9SKhnTLVK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;426&quot; height=&quot;161&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;335&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Short-Range vs. Long-Range Matching&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;863&quot; data-origin-height=&quot;566&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LRGbP/btsN76x1FF1/HewyyYxw8MocBllaxvXF8K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LRGbP/btsN76x1FF1/HewyyYxw8MocBllaxvXF8K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LRGbP/btsN76x1FF1/HewyyYxw8MocBllaxvXF8K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLRGbP%2FbtsN76x1FF1%2FHewyyYxw8MocBllaxvXF8K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;458&quot; height=&quot;300&quot; data-origin-width=&quot;863&quot; data-origin-height=&quot;566&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Short-range matching (e.g., $N = 1$ 및 작은 $M$)은 &lt;b&gt;일반적으로 long-range matching보다 낮은 성능&lt;/b&gt;을 보임.&lt;/li&gt;
&lt;li&gt;Short-range matching 기반 방법인 DSA는 short-range behavior를 맞추는 데 최적화되어 있어, &lt;b&gt;학습이 길어질수록 오차가 누적&lt;/b&gt;되어 성능이 저하됨.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Tiny ImageNet&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;876&quot; data-origin-height=&quot;573&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cq1AWH/btsN7ztTmMb/4BmDugIhU9kIHmMBu6OePk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cq1AWH/btsN7ztTmMb/4BmDugIhU9kIHmMBu6OePk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cq1AWH/btsN7ztTmMb/4BmDugIhU9kIHmMBu6OePk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcq1AWH%2FbtsN7ztTmMb%2F4BmDugIhU9kIHmMBu6OePk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;471&quot; height=&quot;308&quot; data-origin-width=&quot;876&quot; data-origin-height=&quot;573&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Distribution Matching (DM) 외의 &lt;b&gt;대부분의 Dataset Distillation 방법들은 메모리 및 시간 소모가 매우 커서 큰 해상도에서는 제대로 작동하지 못함.&lt;/b&gt; 반면, &lt;b&gt;제안 방법은 뛰어난 성능을 보여줌.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;ImageNet Subsets&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;864&quot; data-origin-height=&quot;284&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LvK1A/btsN8BYDvYG/9y6FifV6yZ3ZipuRY81vk1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LvK1A/btsN8BYDvYG/9y6FifV6yZ3ZipuRY81vk1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LvK1A/btsN8BYDvYG/9y6FifV6yZ3ZipuRY81vk1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLvK1A%2FbtsN8BYDvYG%2F9y6FifV6yZ3ZipuRY81vk1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;446&quot; height=&quot;147&quot; data-origin-width=&quot;864&quot; data-origin-height=&quot;284&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Tiny ImageNet 실험과 유사하게, 대부분의 기존 기법들은 이 정도 해상도에 적용하기 어려움. 따라서, 비교 대상으로 전체 real dataset으로 학습된 네트워크를 사용함.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Discussion and Limitations&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;제안한 방법은 &lt;b&gt;short-range single-step matching&lt;/b&gt;에 의존하지 않으며, 그렇다고 &lt;b&gt;전체 학습 과정을 직접 최적화하는 full-process 방식&lt;/b&gt;에도 의존하지 않음. 오히려 두 접근법 &lt;b&gt;사이의 균형을 잡는 전략&lt;/b&gt;을 통해, &lt;b&gt;안정성과 성능 면에서 기존 방법들을 모두 능가&lt;/b&gt;함.&lt;/li&gt;
&lt;li&gt;본 방법은 &lt;b&gt;$128 \times 128$ 해상도의 ImageNet 이미지로 확장된 최초의 증류 기법&lt;/b&gt;임.&lt;/li&gt;
&lt;li&gt;제안한 방식은 expert trajectories를 사전에 계산하여 &lt;b&gt;메모리 사용량을 줄일 수 있는 장점&lt;/b&gt;이 있지만, 동시에 전문가 모델 학습과 궤적 저장을 위한 디스크 공간 및 계산 비용이 요구된다는 &lt;b&gt;한계점&lt;/b&gt;이 존재함.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Dataset Distillation</category>
      <category>computer_vision</category>
      <category>dataset_distillation</category>
      <category>parameter_matching</category>
      <category>trajectory_matching</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/50</guid>
      <comments>https://hakk35.tistory.com/50#entry50comment</comments>
      <pubDate>Thu, 22 May 2025 15:30:24 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] Dataset condensation with gradient matching (DC)</title>
      <link>https://hakk35.tistory.com/49</link>
      <description>&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of &lt;/i&gt;&lt;/span&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://arxiv.org/pdf/2006.05929&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Dataset&amp;nbsp;condensation&amp;nbsp;with&amp;nbsp;gradient&amp;nbsp;matching&lt;/a&gt;&quot; presented at ICLR 2021.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Dataset Distillation을, &lt;b&gt;전체 학습 데이터와 소수의 합성 데이터에서 학습된 신경망 가중치의 gradient 간의 일치 문제(gradient matching problem)&lt;/b&gt;로 정식화함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1556&quot; data-origin-height=&quot;646&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c9hTUm/btsN19OHBUM/32uqbCqg3bjAeka1BHxXV1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c9hTUm/btsN19OHBUM/32uqbCqg3bjAeka1BHxXV1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c9hTUm/btsN19OHBUM/32uqbCqg3bjAeka1BHxXV1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc9hTUm%2FbtsN19OHBUM%2F32uqbCqg3bjAeka1BHxXV1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1556&quot; height=&quot;646&quot; data-origin-width=&quot;1556&quot; data-origin-height=&quot;646&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;대규모 데이터를 효과적으로 처리하는 전통적인 방법&lt;/b&gt;은 &lt;b&gt;coreset construction&lt;/b&gt;이며, 이는 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;&lt;b&gt;클러스터링 기반의 접근법&lt;/b&gt;을 사용함. 또한, &lt;b&gt;continual learning&lt;/b&gt;이나 &lt;b&gt;active learning&lt;/b&gt;을 통해 대규모 데이터를 효율적으로 다루려는 연구도 활발히 진행되고 있음.&lt;/li&gt;
&lt;li&gt;이러한 방법들은 일반적으로 &lt;b&gt;대표성을 정의하는 기준&lt;/b&gt;(e.g., diversity, representation 등)을 먼저 설정하고, 해당 기준에 따라 &lt;b&gt;대표 샘플을 선택&lt;/b&gt;한 뒤, 선택된 소규모 데이터셋으로 &lt;b&gt;downstream 작업&lt;/b&gt;(e.g., classification 등)을 위한 &lt;b&gt;모델을 학습&lt;/b&gt;함.&lt;/li&gt;
&lt;li&gt;그러나 이러한 접근법들은 heuristic에 의존하기 때문에 &lt;b&gt;downstream 작업에 대해 최적이라는 보장이 없으며&lt;/b&gt;, 실제로 &lt;b&gt;대표성 있는 샘플이 존재한다는 것도 보장되지 않음.&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;본 논문은 이러한 한계를 극복하기 위해, &lt;b&gt;대규모 원본 데이터와 소규모 합성 데이터로부터 학습된 신경망의 gradient 간 차이를 최소화하는 gradient matching 기반의 dataset distillation 방법을 최초로 제안&lt;/b&gt;함.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*전체 데이터들을 몇 개의 중심점(대표 샘플)으로 요약함.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Method&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Deep neural network $\phi$는 전체 데이터셋 $\mathcal{T}$에 대해서 다음의 empirical loss를 최소화하여 parameter $\theta$를 최적화함.&lt;br /&gt;$$ &lt;br /&gt;\theta^{\mathcal{T}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{T}}(\theta);\quad \mathcal{L}^{\mathcal{T}}(\theta)&amp;nbsp;=&amp;nbsp;\frac{1}{|\mathcal{T}|}&amp;nbsp;\sum_{(x,&amp;nbsp;y)&amp;nbsp;\in&amp;nbsp;\mathcal{T}}&amp;nbsp;\ell(\phi_\theta(x),&amp;nbsp;y)&lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;Dataset distillation의 목적은 condensed synthetic samples $\mathcal{S}$을 만드는 것으로, 이를 통해 학습한 모델은 다음과 같음.&lt;br /&gt;$$&lt;br /&gt;\theta^{\mathcal{S}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta);\quad \mathcal{L}^{\mathcal{S}}(\theta) = \frac{1}{|\mathcal{S}|} \sum_{(s, y) \in \mathcal{S}} \ell(\phi_\theta(s), y)&lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;이를 통해 얻은 $\phi_{\theta^\mathcal{S}}$ 모델의 일반화 성능이 $\phi_{\theta^\mathcal{T}}$의 일반화 성능과 최대한 가까워야 함.&lt;br /&gt;$$ &lt;br /&gt;\mathbb{E}_{x&amp;nbsp;\sim&amp;nbsp;P_{\mathcal{D}}}&amp;nbsp;\left[&amp;nbsp;\ell\left(&amp;nbsp;\phi_{\theta^{\mathcal{T}}}(x),&amp;nbsp;y&amp;nbsp;\right)&amp;nbsp;\right]&amp;nbsp;\simeq&amp;nbsp;\mathbb{E}_{x&amp;nbsp;\sim&amp;nbsp;P_{\mathcal{D}}}&amp;nbsp;\left[&amp;nbsp;\ell\left(&amp;nbsp;\phi_{\theta^{\mathcal{S}}}(x),&amp;nbsp;y&amp;nbsp;\right)&amp;nbsp;\right] &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;초기 Dataset Distillation 논문 [&lt;a href=&quot;https://hakk35.tistory.com/48&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;related post&lt;/a&gt;]은 모델 파라미터 $\theta^\mathcal{S}$를 &lt;span style=&quot;color: #333333; text-align: left;&quot;&gt;synthetic data $\mathcal{S}$의 함수로 정의함. 이를 통해 최적의 synthetic images $\mathcal{S}^*$에 학습된 모델 $\theta^\mathcal{S}$이 original dataset $\mathcal{T}$에 대해서 학습 손실이 최소가 되도록 함.&lt;br /&gt;$$ &lt;br /&gt;\mathcal{S}^* = \arg\min_\mathcal{S} \mathcal{L}^{\mathcal{T}}(\theta^{\mathcal{S}}(\mathcal{S})) \quad \text{subject to} \quad \theta^{\mathcal{S}}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta) &lt;br /&gt;$$&lt;/span&gt;&lt;span style=&quot;color: #333333; text-align: left;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;하지만, 이는 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;nested loop optimization을 포함하고 있으므로 계산 비용이 높음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*바깥 루프에서는 합성 데이터 $\mathcal{S}$를 업데이트하고, 안쪽 루프에서는 현재 $\mathcal{S}$에 대해 $\theta^\mathcal{S}$를 새로 학습해야 함. 이때, 합성 데이터 $\mathcal{S}$의 gradient를 구하기 위해서는 내부 루프에서 전체 신경망을 다시 학습해야 함.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
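위 nested loop의 비용 구조는 선형 회귀를 예로 든 간단한 sketch로 확인해볼 수 있음. 아래 코드는 설명을 위한 가정(toy 선형 모델 + MSE, `inner_train`·`outer_objective`라는 가상의 이름)이며, 핵심은 바깥 루프가 $\mathcal{S}$를 한 번 평가할 때마다 안쪽 루프의 전체 학습이 다시 수행된다는 점임.

```python
import numpy as np

def inner_train(X_S, y_S, theta0, steps=100, lr=0.1):
    """안쪽 루프: 현재 합성 데이터 (X_S, y_S)에 대해 theta를 처음부터 다시 학습."""
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * 2 * X_S.T @ (X_S @ theta - y_S) / len(y_S)  # MSE gradient
    return theta

def outer_objective(X_S, y_S, X_T, y_T, theta0):
    """바깥 루프의 목적함수 L^T(theta^S(S)):
    S를 한 번 평가할 때마다 inner_train 전체(steps번의 업데이트)가 필요함."""
    theta_S = inner_train(X_S, y_S, theta0)
    return float(np.mean((X_T @ theta_S - y_T) ** 2))
```

즉, $\mathcal{S}$에 대한 gradient를 한 번 구하려면 `inner_train`의 모든 step을 통과하는 계산 그래프가 필요하므로, 데이터와 모델이 커질수록 비용이 빠르게 증가함.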
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation with Parameter Matching&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Parameter matching은 합성 데이터 $\mathcal{S}$에서 학습한 모델 $\phi_{\theta^\mathcal{S}}$이 원본 데이터에서 학습한 모델 $\phi_{\theta^\mathcal{T}}$와 유사한 일반화 성능을 얻을 뿐 아니라, 파라미터 공간 상에서 유사한 해 $(\theta^\mathcal{S} \approx \theta^\mathcal{T})$로 수렴하도록 유도함.&lt;/li&gt;
&lt;li&gt;$\phi_\theta$가 locally smooth function일 때, 유사한 weight $(\theta^\mathcal{S} \approx \theta^\mathcal{T})$는 국소 영역에서 유사한 mapping을 의미하고, 결과적으로 유사한 일반화 성능을 의미함. 이러한 목표는 다음의 식으로 표현될 수 있음.&lt;br /&gt;$$ &lt;br /&gt;\min_\mathcal{S} D(\theta^\mathcal{S}, \theta^\mathcal{T}) \quad \text{subject to} \quad \theta^\mathcal{S}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^\mathcal{S}(\theta) &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;즉, $\theta^\mathcal{S}$를 $\mathcal{S}$ 데이터에서 훈련하여 얻은 최적의 파라미터라고 할 때, $\theta^\mathcal{S}$와 $\theta^\mathcal{T}$ 간의 거리를 최소화하여 $\mathcal{S}$를 최적화하는 문제임.&lt;/li&gt;
&lt;li&gt;위는 하나의 고정된 초기값 $\theta_0$에서 학습된 모델에 최적화된 합성데이터를 얻지만, 실제로는 랜덤 초기값에 대해서 잘 작동하는 합성데이터를 만들어야 함.&lt;br /&gt;$$ &lt;br /&gt;\min_\mathcal{S} \mathbb{E}_{\theta_0 \sim P_{\theta_0}} \left[ D(\theta^\mathcal{S}(\theta_0), \theta^\mathcal{T}(\theta_0)) \right] &lt;br /&gt;\quad&amp;nbsp;\text{subject&amp;nbsp;to}&amp;nbsp;\quad &lt;br /&gt;\theta^\mathcal{S}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^\mathcal{S}(\theta(\theta_0)) &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;하지만, 이 또한 합성 데이터 $\mathcal{S}$에 따라 모델 $\theta^\mathcal{S}$를 다시 학습해야 하기 때문에, 매우 큰 계산 비용이 요구됨. 이를 해결하기 위해서, $\theta^\mathcal{S}$를 &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;incomplete optimization의 출력으로 재정의하는 back-optimization 접근을 활용할 수 있음.&lt;br /&gt;$$ &lt;br /&gt;\theta^\mathcal{S}(\mathcal{S}) = \text{opt-alg}_{\theta}(\mathcal{L}^\mathcal{S}(\theta), \varsigma) &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;실제 구현에서는 서로 다른 초기값에 대해 $\theta^\mathcal{T}$를 미리 offline으로 학습해두고, 이를 target parameter vector로 사용할 수 있지만, 이는 아래의 두 가지 문제가 있음.
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;$\theta^\mathcal{S}$가 학습되는 중간 단계에서는 $\theta^\mathcal{T}$와의 거리가 매우 멀 수 있으며, 이 경로상에 다수의 local minima가 존재해 도달하기 어려움.&lt;/li&gt;
&lt;li&gt;$\text{opt-alg}$ 최적화 과정은 계산 속도와 정확도 간의 trade-off로 인해 제한된 step $(\varsigma)$만 진행되므로 최적해에 도달하기 어려움.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323; text-align: right;&quot;&gt;*최적의 해를 다 찾기 전에 중간에서 멈추는 최적화, 즉 중간 몇 step까지만 최적화를 진행하고 멈춤.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation with Curriculum Gradient Matching&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Parameter matching의 문제를 해결하기 위해 curriculum 기반의 방법을 제안하여, $\theta^\mathcal{S}$가 최종 $\theta^\mathcal{T}$와 가까워지는 것뿐만 아니라, &lt;span style=&quot;color: #ee2323;&quot;&gt;*&lt;/span&gt;$\theta^\mathcal{T}$와 비슷한 경로를 따르도록 함.&lt;br /&gt;$$ &lt;br /&gt;\min_\mathcal{S} \mathbb{E}_{\theta_0 \sim P_{\theta_0}} \left[ \sum_{t=0}^{T-1} D(\theta_t^\mathcal{S} , \theta_t^\mathcal{T}) \right] &lt;br /&gt;\quad \text{subject to} $$ $$&lt;br /&gt;\theta_{t+1}^\mathcal{S}(\mathcal{S}) = \text{opt-alg}_\theta(\mathcal{L}^\mathcal{S}(\theta_t^\mathcal{S}), \varsigma^\mathcal{S}) &lt;br /&gt;\quad&amp;nbsp;\text{and}&amp;nbsp;\quad &lt;br /&gt;\theta_{t+1}^\mathcal{T} = \text{opt-alg}_\theta(\mathcal{L}^\mathcal{T}(\theta_t^\mathcal{T}), \varsigma^\mathcal{T}) &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;이를 통해, 매 iteration마다, 합성데이터 $\mathcal{S}$로 학습된 파라미터 $\theta^\mathcal{S}_t$가 원본데이터로 학습된 파라미터 $\theta^\mathcal{T}_t$와 유사하도록 합성데이터 $\mathcal{S}$를 학습하게 됨.&lt;/li&gt;
&lt;li&gt;$D(\theta^\mathcal{S}_t, \theta^\mathcal{T}_t) \approx 0$을 통해서, $\theta^\mathcal{T}_t$를 $\theta^\mathcal{S}_t$로 대체하고 $\theta^\mathcal{S}$를 $\theta$로 표기하면 다음과 같이 단순화할 수 있음.&lt;br /&gt;$$ &lt;br /&gt;\theta_{t+1}^\mathcal{S} \leftarrow \theta_t^\mathcal{S} - \eta_\theta \nabla_\theta \mathcal{L}^\mathcal{S}(\theta_t^\mathcal{S}) &lt;br /&gt;\quad&amp;nbsp;\text{and}&amp;nbsp;\quad &lt;br /&gt;\theta_{t+1}^\mathcal{T} \leftarrow \theta_t^\mathcal{T} - \eta_\theta \nabla_\theta \mathcal{L}^\mathcal{T}(\theta_t^\mathcal{T}) &lt;br /&gt;$$ $$ &lt;br /&gt;\min_\mathcal{S}&amp;nbsp;\mathbb{E}_{\theta_0&amp;nbsp;\sim&amp;nbsp;P_{\theta_0}}&amp;nbsp;\left[&amp;nbsp;\sum_{t=0}^{T-1}&amp;nbsp;D\left(&amp;nbsp;\nabla_\theta&amp;nbsp;\mathcal{L}^\mathcal{S}(\theta_t),&amp;nbsp;\nabla_\theta&amp;nbsp;\mathcal{L}^\mathcal{T}(\theta_t)&amp;nbsp;\right)&amp;nbsp;\right]. &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;즉, 모델 파라미터 $\theta$에 대한 원본데이터 loss와 합성데이터 loss의 gradient를 일치시키도록 &lt;span style=&quot;color: #333333; text-align: left;&quot;&gt;$\mathcal{S}$를 업데이트할 수 있음&lt;/span&gt;. 이를 통해, &lt;span style=&quot;color: #006dd7;&quot;&gt;*&lt;/span&gt;이전 파라미터들에 대한 계산 그래프를 unroll할 필요가 없다는 장점이 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*$\theta$가 자유롭게 최적화되는 걸 제한할 수 있지만, 원하는 방향으로 수렴하도록 최적화 방향을 더 잘 안내해주고, step 수가 적은 optimization이라도 좋은 결과를 얻을 수 있음.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;*기존 방법은 모델 파라미터가 여러 스텝에 걸쳐 업데이트되는 전체 과정을 추적해야 하며, 그 경로에 따라 역전파를 적용할 수 있도록 계산 그래프를 풀어서(unroll) 저장해야 함. 즉, $(\theta_1 \rightarrow \theta_2 \rightarrow \dots \rightarrow \theta_T)$. 따라서 이는 시간과 메모리 소모가 큼. &lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;반면, gradient matching 방법은 현재 시점의 파라미터에 대한 gradient만 계산하면 되므로, 파라미터 경로를 역추적하거나 저장할 필요가 없음. 즉, 계산 그래프를 unroll할 필요가 없음.&lt;/span&gt;&lt;/p&gt;
&lt;h4 style=&quot;text-align: left;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;Algorithm&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;443&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vRXmY/btsN4GkSsq0/lD3B7oky8Klqt59589alSk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vRXmY/btsN4GkSsq0/lD3B7oky8Klqt59589alSk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vRXmY/btsN4GkSsq0/lD3B7oky8Klqt59589alSk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvRXmY%2FbtsN4GkSsq0%2FlD3B7oky8Klqt59589alSk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1012&quot; height=&quot;443&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;443&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;합성데이터가 다양한 초기 모델에서도 잘 작동하도록, outer loop에서는 매번 $\theta$를 무작위로 초기화한 뒤 그에 맞춰 합성데이터를 학습시킴.&lt;/li&gt;
&lt;li&gt;$\theta$가 무작위로 초기화되면, 원본데이터에 대한 loss $\mathcal{L}^\mathcal{T}$와 합성데이터에 대한 loss $\mathcal{L}^\mathcal{S}$를 구하고, $\theta$에 대한 gradient를 구함.&lt;/li&gt;
&lt;li&gt;gradient $\nabla_\theta\mathcal{L}^\mathcal{S}$를 $\nabla_\theta\mathcal{L}^\mathcal{T}$와 가깝도록 합성데이터 $\mathcal{S}$를 최적화함.
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;매 iteration마다, 하나의 클래스에 해당하는 샘플로만 원본데이터와 합성데이터 손실함수를 계산하며, 각 클래스에 대한 합성데이터를 병렬적으로 업데이트함.&lt;/li&gt;
&lt;li&gt;여러 클래스를 동시에 흉내 내는 것보다, 단일 클래스에 대해 평균 gradient를 모방하는 것이 더 쉬움.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;업데이트된 합성데이터를 사용하여, Loss $\mathcal{L}^\mathcal{S}$가 최소화되도록 $\theta$를 학습시킴.&lt;/li&gt;
&lt;/ol&gt;
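위 알고리즘의 골격을 간단한 코드로 옮기면 다음과 같음. 실제 구현은 신경망에 대해 backprop으로 $\nabla_\mathcal{S} D$를 계산하지만, 아래는 toy 선형 모델 + MSE에 수치 미분(finite difference)을 사용하고 클래스별 병렬 업데이트를 생략한, 설명용 sketch임 (`matching_step` 등의 이름과 하이퍼파라미터는 모두 가정임).

```python
import numpy as np

def model_grad(X, y, theta):
    # 선형 모델 + MSE loss의 gradient: ∇_θ L = 2Xᵀ(Xθ − y)/n
    return 2 * X.T @ (X @ theta - y) / len(y)

def cosine_dist(a, b, eps=1e-8):
    # 두 gradient vector 간 cosine distance
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def matching_step(X_S, y_S, X_T, y_T, theta, eta_S=0.01, h=1e-5):
    """D(∇L^S, ∇L^T)를 X_S에 대해 수치 미분으로 근사하여 한 step 내리는 sketch."""
    g_T = model_grad(X_T, y_T, theta)
    d0 = cosine_dist(model_grad(X_S, y_S, theta), g_T)
    grad_S = np.zeros_like(X_S)
    for idx in np.ndindex(X_S.shape):
        Xp = X_S.copy()
        Xp[idx] += h
        grad_S[idx] = (cosine_dist(model_grad(Xp, y_S, theta), g_T) - d0) / h
    return X_S - eta_S * grad_S, d0

# 알고리즘 골격: outer loop마다 θ를 무작위 초기화, inner loop에서 S 갱신 후 θ 학습
rng = np.random.default_rng(0)
X_T = rng.normal(size=(256, 4))
theta_true = rng.normal(size=4)
y_T = X_T @ theta_true                      # 원본 데이터 T
X_S = rng.normal(size=(8, 4))
y_S = X_S @ theta_true                      # 학습할 합성 데이터 S
for outer in range(3):
    theta = rng.normal(size=4)              # 1. θ 무작위 초기화
    for t in range(10):
        X_S, _ = matching_step(X_S, y_S, X_T, y_T, theta)  # 2-3. S 업데이트
        theta -= 0.1 * model_grad(X_S, y_S, theta)         # 4. 갱신된 S로 θ 학습
```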
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Gradient matching loss&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;$\phi_\theta$가 multi-layered neural network이므로, matching loss $D$를 layerwise loss $d$의 합으로 표현할 수 있음.&lt;br /&gt;$$ &lt;br /&gt;D(\nabla_\theta \mathcal{L}^\mathcal{S}, \nabla_\theta \mathcal{L}^\mathcal{T}) = \sum_{l=1}^{L} d(\nabla_{\theta^{(l)}} \mathcal{L}^\mathcal{S}, \nabla_{\theta^{(l)}} \mathcal{L}^\mathcal{T}) &lt;br /&gt;$$ $$ &lt;br /&gt;d(\mathbf{A},&amp;nbsp;\mathbf{B})&amp;nbsp;=&amp;nbsp;\sum_{i=1}^{\text{out}}&amp;nbsp;\left(&amp;nbsp;1&amp;nbsp;-&amp;nbsp;\frac{\mathbf{A}_i&amp;nbsp;\cdot&amp;nbsp;\mathbf{B}_i}{\|\mathbf{A}_i\|&amp;nbsp;\|\mathbf{B}_i\|}&amp;nbsp;\right) &lt;br /&gt;$$
&lt;ul style=&quot;list-style-type: circle;&quot; data-ke-list-type=&quot;circle&quot;&gt;
&lt;li&gt;$\mathbf{A}_i, \mathbf{B}_i$는 각 출력 노드 $i$에 해당하는 gradient를 flatten한 vector임.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
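위 layerwise matching loss는 예컨대 다음과 같이 구현해볼 수 있음 (numpy 기반의 설명용 sketch이며, 각 layer의 gradient를 (출력 노드 수, $-1$) 모양으로 flatten해서 넘긴다고 가정함).

```python
import numpy as np

def layer_distance(A, B, eps=1e-8):
    """한 layer에 대한 d(A, B): 출력 노드별 gradient vector 간 cosine distance의 합."""
    # A, B: (out, -1) 모양으로 flatten된 gradient
    dot = (A * B).sum(axis=1)
    norm = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    return float((1.0 - dot / (norm + eps)).sum())

def matching_loss(grads_S, grads_T):
    """전체 matching loss D: layerwise distance d의 합."""
    return sum(layer_distance(A.reshape(A.shape[0], -1),
                              B.reshape(B.shape[0], -1))
               for A, B in zip(grads_S, grads_T))
```

두 gradient가 방향까지 일치하면 $d = 0$이고, 모든 출력 노드에서 직교하면 $d = \text{out}$이 됨.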
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Condensation&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;합성데이터는 Gaussian noise로부터 초기화되거나 원본데이터에서 무작위로 선택됨.&lt;/li&gt;
&lt;li&gt;Dataset condensation은 합성데이터를 학습하는 단계 $(\text{C})$와 이 합성데이터에 classifier를 학습하는 단계 $(\text{T})$의 두 단계로 이루어져 있음.&lt;/li&gt;
&lt;li&gt;실험평가를 위해, 첫 번째 단계에서는 5개의 합성데이터를 생성하고, 두 번째 단계에서는 각 합성데이터에 대해서 20개의 무작위로 초기화된 모델이 학습됨. 즉, 100개의 모델이 평가됨.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;922&quot; data-origin-height=&quot;386&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bGWPjy/btsN4F7vX34/jokzJXRQub2PWZ0VKePzkK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bGWPjy/btsN4F7vX34/jokzJXRQub2PWZ0VKePzkK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bGWPjy/btsN4F7vX34/jokzJXRQub2PWZ0VKePzkK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbGWPjy%2FbtsN4F7vX34%2FjokzJXRQub2PWZ0VKePzkK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;922&quot; height=&quot;386&quot; data-origin-width=&quot;922&quot; data-origin-height=&quot;386&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Cross-architecture generalization&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;본 논문에서 제안한 방법은 하나의 네트워크 구조에서 학습된 합성이미지를 다른 네트워크 구조를 학습하는 데에도 사용할 수 있다는 장점이 있음. Table 2는 다양한 모델을 대상으로, 합성이미지가 구조에 상관없이 잘 작동한다는 것을 보여줌.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1068&quot; data-origin-height=&quot;325&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bunfHg/btsN2X9GeAS/mjEpI8cxkItW4lVn89KFq1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bunfHg/btsN2X9GeAS/mjEpI8cxkItW4lVn89KFq1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bunfHg/btsN2X9GeAS/mjEpI8cxkItW4lVn89KFq1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbunfHg%2FbtsN2X9GeAS%2FmjEpI8cxkItW4lVn89KFq1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1068&quot; height=&quot;325&quot; data-origin-width=&quot;1068&quot; data-origin-height=&quot;325&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Applications&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Continual Learning&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1057&quot; data-origin-height=&quot;462&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/upP0N/btsN48Pb9Dd/pM0pQYiKWK3e1kl0yHNkck/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/upP0N/btsN48Pb9Dd/pM0pQYiKWK3e1kl0yHNkck/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/upP0N/btsN48Pb9Dd/pM0pQYiKWK3e1kl0yHNkck/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FupP0N%2FbtsN48Pb9Dd%2FpM0pQYiKWK3e1kl0yHNkck%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1057&quot; height=&quot;462&quot; data-origin-width=&quot;1057&quot; data-origin-height=&quot;462&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Neural Architecture Search&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Dataset Distillation으로 합성한 이미지를 활용하면, 다양한 모델을 빠르게 학습시키고 성능을 검증하여 최적의 구조를 효율적으로 얻을 수 있음.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1071&quot; data-origin-height=&quot;221&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/znEsU/btsN3LVrfz9/rPykkbvSrCraksTEXUnIRK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/znEsU/btsN3LVrfz9/rPykkbvSrCraksTEXUnIRK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/znEsU/btsN3LVrfz9/rPykkbvSrCraksTEXUnIRK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FznEsU%2FbtsN3LVrfz9%2FrPykkbvSrCraksTEXUnIRK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1071&quot; height=&quot;221&quot; data-origin-width=&quot;1071&quot; data-origin-height=&quot;221&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;본 논문은 &lt;b&gt;최초의 gradient matching 기반 dataset distillation 방법을 제안&lt;/b&gt;함.&lt;/li&gt;
&lt;li&gt;제안된 방법으로 생성된 이미지들은 &lt;b&gt;특정 모델 구조에 종속되지 않기 때문에, 서로 다른 구조의 모델들을 학습하는 데에도 활용&lt;/b&gt;될 수 있음.&lt;/li&gt;
&lt;li&gt;ImageNet처럼 &lt;b&gt;복잡하고 고해상도의 데이터셋으로 확장할 필요가 있음.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Dataset Distillation</category>
      <category>computer_vision</category>
      <category>dataset_distillation</category>
      <category>gradient_matching</category>
      <category>parameter_matching</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/49</guid>
      <comments>https://hakk35.tistory.com/49#entry49comment</comments>
      <pubDate>Tue, 20 May 2025 14:28:35 +0900</pubDate>
    </item>
    <item>
      <title>[Paper Review] Dataset Distillation (DD)</title>
      <link>https://hakk35.tistory.com/48</link>
      <description>&lt;script&gt; MathJax = { tex: {inlineMath: [['$', '$']]} }; &lt;/script&gt;
&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js&quot;&gt;&lt;/script&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;This is a Korean review of &lt;/i&gt;&lt;/span&gt;&lt;span style=&quot;color: #9d9d9d;&quot;&gt;&lt;i&gt;&quot;&lt;a href=&quot;https://arxiv.org/pdf/1811.10959&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Dataset Distillation&lt;/a&gt;&quot; presented at arXiv 2018.&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;TL;DR&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;전체 학습 데이터의 지식을 소수의 합성 데이터로 압축하는 Dataset Distillation 방법을 최초로 제안함.&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Introduction&lt;/b&gt;&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;786&quot; data-origin-height=&quot;741&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/B9ZaJ/btsN2STbvX6/DUL4TmEdsuWwKsoY62xP81/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/B9ZaJ/btsN2STbvX6/DUL4TmEdsuWwKsoY62xP81/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/B9ZaJ/btsN2STbvX6/DUL4TmEdsuWwKsoY62xP81/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FB9ZaJ%2FbtsN2STbvX6%2FDUL4TmEdsuWwKsoY62xP81%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;786&quot; height=&quot;741&quot; data-origin-width=&quot;786&quot; data-origin-height=&quot;741&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;본 논문은 &lt;b&gt;고정된 모델에 대해 전체 훈련 데이터셋을 몇 장의 합성 이미지로 압축하는 Dataset Distillation이라는 새로운 과제&lt;/b&gt;를 제안함.&lt;/li&gt;
&lt;li&gt;일반적으로 &lt;b&gt;합성 데이터는 실제 데이터와 분포가 달라 학습에 부적합&lt;/b&gt;하다고 여겨지지만, 본 연구는 소수의 synthetic data만으로도 이미지 분류 모델을 효과적으로 학습시킬 수 있음을 보여줌.&lt;/li&gt;
&lt;li&gt;이를 위해 &lt;span style=&quot;color: #006dd7;&quot;&gt;*&lt;/span&gt;&lt;b&gt;모델의 파라미터를 합성 이미지의 미분 가능한 함수로 표현하고, 가중치를 직접 최적화하는 대신 합성 이미지의 픽셀값을 최적화&lt;/b&gt;하는 방식을 사용함.&lt;/li&gt;
&lt;li&gt;다만 이 접근은 초기 파라미터에 대한 접근을 요구하므로, 이를 완화하기 위해 &lt;span style=&quot;color: #ee2323; text-align: left;&quot;&gt;**&lt;/span&gt;&lt;b&gt;랜덤 초기화를 고려한 distilled image 생성 방식&lt;/b&gt;도 제안함.&lt;/li&gt;
&lt;li&gt;더 나아가, &lt;b&gt;여러 에폭에 걸쳐 학습할 수 있는 distilled image 시퀀스를 생성하는 iterative 버전&lt;/b&gt;도 함께 제안되어 성능을 추가적으로 향상시킴.&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;*모델의 업데이트가 합성 이미지에 의해 결정되므로, 이 합성 이미지도 마치 파라미터처럼 최적화하여 real data에서 좋은 성능을 내도록 학습할 수 있음.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;**특정 단일 초기 파라미터에서 학습된 합성데이터는 다른 초기 파라미터를 가진 모델에서 성능이 떨어질 수 있으므로, 초기 파라미터를 확률 분포에서 샘플링하여 다양한 초기화에 대응할 수 있음.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Related Works&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset&amp;nbsp;pruning,&amp;nbsp;core-set&amp;nbsp;construction,&amp;nbsp;and&amp;nbsp;instance&amp;nbsp;selection&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Methods in the dataset pruning, core-set construction, and instance selection family compress a dataset by using only the samples most important for training, or by labeling only informative samples via active learning.&lt;/li&gt;
&lt;li&gt;However, because these methods must use real images, they still require a large number of samples per class.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Approach&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Sec. 3.1: an optimization algorithm that trains the network from a &lt;b&gt;fixed initialization&lt;/b&gt; with only a &lt;b&gt;single gradient descent step&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Sec. 3.2: optimization under &lt;b&gt;random initialization&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Sec. 3.4: extension to &lt;b&gt;multiple gradient descent steps&lt;/b&gt; and &lt;b&gt;multi-epoch training&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Optimizing Distilled Data&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;So that synthetic data $ \tilde{\mathbf{x}} $ distilled for a single step also achieves high performance on real data $ \mathbf{x} $, the following objective is used.&lt;br /&gt;$$ &lt;br /&gt;\theta_1 = \theta_0 - \tilde{\eta} \nabla_{\theta_0} \ell( \tilde{\mathbf{x}} , \theta_0) &lt;br /&gt;$$ $$ &lt;br /&gt;\tilde{\mathbf{x}}^*, \tilde{\eta}^* = \arg\min_{\tilde{\mathbf{x}}, \tilde{\eta}} \mathcal{L}(\tilde{\mathbf{x}}, \tilde{\eta}; \theta_0) &lt;br /&gt;= \arg\min_{\tilde{\mathbf{x}}, \tilde{\eta}} \ell(\mathbf{x}, \theta_1) &lt;br /&gt;= \arg\min_{\tilde{\mathbf{x}}, \tilde{\eta}} \ell\left(\mathbf{x}, \theta_0 - \tilde{\eta} \nabla_{\theta_0} \ell(\tilde{\mathbf{x}}, \theta_0)\right) &lt;br /&gt;$$&lt;/li&gt;
&lt;/ul&gt;
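The bilevel objective above can be sketched with a toy linear-regression example in NumPy. The distilled inputs and labels are optimized with finite-difference gradients purely for illustration (the paper backpropagates through the update), and the step size is held fixed for brevity; all sizes and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: linear regression y = X @ w_true.
d, n_real, n_syn = 5, 200, 5
X = rng.normal(size=(n_real, d))
w_true = rng.normal(size=d)
y = X @ w_true

def inner_grad(X_syn, y_syn, theta):
    # Gradient of the inner loss l(x_tilde, theta) (mean squared error).
    return 2.0 * X_syn.T @ (X_syn @ theta - y_syn) / len(y_syn)

def outer_loss(X_syn, y_syn, eta, theta0):
    # l(x, theta_1) on real data after one inner gradient step from theta0.
    theta1 = theta0 - eta * inner_grad(X_syn, y_syn, theta0)
    return np.mean((X @ theta1 - y) ** 2)

def fd_grad(f, x, eps=1e-5):
    # Central finite-difference gradient of scalar f() w.r.t. array x.
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        x[i] += eps; hi = f()
        x[i] -= 2 * eps; lo = f()
        x[i] += eps
        g[i] = (hi - lo) / (2 * eps)
    return g

theta0 = rng.normal(size=d)          # fixed initialization (Sec. 3.1)
X_syn = rng.normal(size=(n_syn, d))  # distilled inputs, optimized below
y_syn = rng.normal(size=n_syn)       # distilled labels, optimized below
eta = 0.1                            # inner step size (held fixed here)

loss_before = outer_loss(X_syn, y_syn, eta, theta0)
for _ in range(200):
    f = lambda: outer_loss(X_syn, y_syn, eta, theta0)
    X_syn -= 0.01 * fd_grad(f, X_syn)   # optimize the "pixels", not the weights
    y_syn -= 0.01 * fd_grad(f, y_syn)
loss_after = outer_loss(X_syn, y_syn, eta, theta0)
print(loss_before, loss_after)
```

The point of the sketch is the structure: the model weights are never optimized directly; only the synthetic data that determines the single update is.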
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Distillation for Random Initialization&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;499&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/w9tlu/btsN12Ww3SQ/hcgjkhrZ1cKjlC7ZEJfK61/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/w9tlu/btsN12Ww3SQ/hcgjkhrZ1cKjlC7ZEJfK61/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/w9tlu/btsN12Ww3SQ/hcgjkhrZ1cKjlC7ZEJfK61/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fw9tlu%2FbtsN12Ww3SQ%2FhcgjkhrZ1cKjlC7ZEJfK61%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;850&quot; height=&quot;499&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;499&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The fixed-initialization optimization from Sec. 3.1 &lt;b&gt;generalizes poorly to other initializations.&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Such distilled data can &lt;b&gt;look like random noise&lt;/b&gt; (Fig. 2), because it &lt;b&gt;implicitly encodes not only the training data but also the specific initial weights&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;Therefore, so that the distilled data also &lt;b&gt;works for randomly initialized networks&lt;/b&gt; sampled from a distribution $ p(\theta_0) $, the following &lt;b&gt;expectation-based optimization problem&lt;/b&gt; is defined.&lt;br /&gt;$$ &lt;br /&gt;\tilde{\mathbf{x}}^*, \tilde{\eta}^* = \arg\min_{\tilde{\mathbf{x}}, \tilde{\eta}} \mathbb{E}_{\theta_0 \sim p(\theta_0)} \mathcal{L}(\tilde{\mathbf{x}}, \tilde{\eta}; \theta_0) &lt;br /&gt;$$&lt;img style=&quot;text-align: center; caret-color: transparent; letter-spacing: 0px;&quot; src=&quot;https://blog.kakaocdn.net/dn/tenmV/btsN1s9iGod/WR2yKw27Ff12pjYXM8dlS0/img.png&quot; data-is-animation=&quot;false&quot; data-origin-height=&quot;368&quot; data-origin-width=&quot;855&quot; /&gt;&lt;/li&gt;
&lt;li&gt;The resulting synthetic data &lt;b&gt;generalizes well to unseen initializations&lt;/b&gt; and appears as &lt;b&gt;highly informative images that visually capture the discriminative features of each class&lt;/b&gt; (Fig. 3). &lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;600&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bbigsE/btsN2ZEy69u/ZOGtGTt3kennVEcwWIdwiK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bbigsE/btsN2ZEy69u/ZOGtGTt3kennVEcwWIdwiK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bbigsE/btsN2ZEy69u/ZOGtGTt3kennVEcwWIdwiK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbbigsE%2FbtsN2ZEy69u%2FZOGtGTt3kennVEcwWIdwiK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;850&quot; height=&quot;600&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;600&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;For this to work well, the &lt;b&gt;local conditions&lt;/b&gt; of the loss (e.g., curvature, gradient magnitude, update direction) must be &lt;b&gt;similar&lt;/b&gt; across initializations $ \theta_0 \sim p(\theta_0) $. &amp;rarr; Only then does the same synthetic image avoid &lt;b&gt;pushing differently initialized models in entirely different update directions&lt;/b&gt;.&lt;/li&gt;
&lt;/ul&gt;
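The expectation over $ p(\theta_0) $ above can likewise be sketched by Monte-Carlo sampling a few initializations and optimizing the distilled data against the averaged outer loss. This is again a toy NumPy linear-regression sketch with finite-difference gradients; all names, sizes, and the choice of a Gaussian $ p(\theta_0) $ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n_real, n_syn, n_init = 4, 100, 4, 8
X = rng.normal(size=(n_real, d))
w_true = rng.normal(size=d)
y = X @ w_true

thetas = rng.normal(size=(n_init, d))   # Monte-Carlo draws from p(theta_0)

def expected_outer_loss(X_syn, y_syn, eta):
    # E_{theta_0 ~ p(theta_0)} of the real-data loss after one inner step.
    total = 0.0
    for theta0 in thetas:
        g = 2.0 * X_syn.T @ (X_syn @ theta0 - y_syn) / n_syn
        total += np.mean((X @ (theta0 - eta * g) - y) ** 2)
    return total / n_init

def fd_grad(f, x, eps=1e-5):
    # Central finite-difference gradient of scalar f() w.r.t. array x.
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        x[i] += eps; hi = f()
        x[i] -= 2 * eps; lo = f()
        x[i] += eps
        g[i] = (hi - lo) / (2 * eps)
    return g

X_syn = rng.normal(size=(n_syn, d))
y_syn = rng.normal(size=n_syn)
eta = 0.1

before = expected_outer_loss(X_syn, y_syn, eta)
for _ in range(200):
    f = lambda: expected_outer_loss(X_syn, y_syn, eta)
    X_syn -= 0.01 * fd_grad(f, X_syn)
    y_syn -= 0.01 * fd_grad(f, y_syn)
after = expected_outer_loss(X_syn, y_syn, eta)

# The same distilled data applied to a held-out initialization:
theta_new = rng.normal(size=d)
g = 2.0 * X_syn.T @ (X_syn @ theta_new - y_syn) / n_syn
held_out = np.mean((X @ (theta_new - eta * g) - y) ** 2)
print(before, after, held_out)
```

Averaging the outer loss over sampled initializations is exactly what prevents the distilled data from overfitting one specific $ \theta_0 $.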
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;color: #ee2323; text-align: left;&quot;&gt;&amp;para;&lt;/span&gt;Analysis of A Simple Linear Case&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;An analysis of a linear regression problem shows that, for a single gradient descent step to yield synthetic data that works under any initialization (i.e., to reach the exact global minimum), the number of synthetic samples must be at least the feature dimension. &amp;rarr; &lt;b&gt;Since real images have thousands to hundreds of thousands of dimensions, this is impractical&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;Therefore, the distribution $ p(\theta_0) $ must be restricted so that only initializations with similar local conditions are used.&amp;nbsp; &amp;rarr; &lt;b&gt;This motivates extending to multiple gradient descent steps and multi-epoch training&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;text-align: right;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323; text-align: left;&quot;&gt;&amp;para;See the original paper for details&lt;/span&gt;&lt;/p&gt;
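The dimension-count requirement in the linear case can be sketched as follows, assuming a squared loss $ \ell(\tilde{X}, \theta) = \lVert \tilde{X}\theta - \tilde{\mathbf{y}} \rVert^2 $ over $ N $ synthetic samples (the notation is illustrative; see the paper for the exact setup):

$$ \theta_1 = \theta_0 - \tilde{\eta} \nabla_{\theta_0} \ell(\tilde{X}, \theta_0) = \left( I - 2\tilde{\eta}\, \tilde{X}^\top \tilde{X} \right) \theta_0 + 2\tilde{\eta}\, \tilde{X}^\top \tilde{\mathbf{y}} $$

For $ \theta_1 $ to equal the global minimizer for every $ \theta_0 $, the $ \theta_0 $-dependent term must vanish, i.e., $ 2\tilde{\eta}\, \tilde{X}^\top \tilde{X} = I $. Since $ \tilde{X}^\top \tilde{X} $ is a $ d \times d $ matrix of rank at most $ N $, it can only be full rank (hence invertible) when $ N \ge d $.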
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Multiple Gradient Descent Steps and Multiple Epochs&lt;/h3&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;A single gradient descent step is not enough to train the model&lt;/b&gt;, so training is extended to multiple steps as follows.&lt;br /&gt;$$ &lt;br /&gt;\theta_{i+1} = \theta_i - \tilde{\eta}_i \nabla_{\theta_i} \ell(\tilde{\mathbf{x}}_i, \theta_i) &lt;br /&gt;$$&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Multiple epochs&lt;/b&gt; are implemented by &lt;b&gt;repeating the entire sequence&lt;/b&gt; of gradient descent steps above &lt;b&gt;several times&lt;/b&gt;.&lt;/li&gt;
&lt;/ul&gt;
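The multi-step, multi-epoch schedule above can be sketched as a plain replay loop. Here the "distilled" batches are just fixed random subsets standing in for learned $ (\tilde{\mathbf{x}}_i, \tilde{\eta}_i) $ pairs, purely to illustrate how the step sequence is repeated across epochs; no distillation is performed in this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

d, n = 6, 300
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

# Stand-ins for a learned sequence of distilled batches (x_tilde_i, eta_i):
# fixed data subsets with a shared small step size, used only to show the schedule.
steps = [(X[i * 10:(i + 1) * 10], y[i * 10:(i + 1) * 10], 0.05) for i in range(5)]

def loss(theta):
    # Full-data mean squared error, used to track progress.
    return np.mean((X @ theta - y) ** 2)

theta = rng.normal(size=d)
history = [loss(theta)]
for epoch in range(3):                 # multiple epochs: replay the whole sequence
    for X_i, y_i, eta_i in steps:      # multiple steps: theta_{i+1} = theta_i - eta_i * grad
        grad = 2.0 * X_i.T @ (X_i @ theta - y_i) / len(y_i)
        theta = theta - eta_i * grad
    history.append(loss(theta))
print(history)
```

Each epoch replays the same ordered list of (batch, step size) pairs, which is all the "iterative version" adds on top of the single-step formulation.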
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Experiments&lt;/b&gt;&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dataset Distillation&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Fixed initialization and &lt;/b&gt;&lt;b&gt;Random initialization&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;846&quot; data-origin-height=&quot;256&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bpT1B5/btsN09CcQJv/blsqQxLzdH9Jx5G6tWzlRk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bpT1B5/btsN09CcQJv/blsqQxLzdH9Jx5G6tWzlRk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bpT1B5/btsN09CcQJv/blsqQxLzdH9Jx5G6tWzlRk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbpT1B5%2FbtsN09CcQJv%2FblsqQxLzdH9Jx5G6tWzlRk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;846&quot; height=&quot;256&quot; data-origin-width=&quot;846&quot; data-origin-height=&quot;256&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Multiple gradient descent steps and multiple epochs&lt;/b&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;In early steps the distilled images look close to noise, but later ones look like real data&lt;/b&gt; and share discriminative features of each class (Fig. 3).&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Training longer (more steps) and more often (more epochs)&lt;/b&gt; lets the model absorb more knowledge from the distilled images.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;851&quot; data-origin-height=&quot;279&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b4IGQg/btsN1ieQF2g/f2HKoM1pmTD7ITgViJp68K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b4IGQg/btsN1ieQF2g/f2HKoM1pmTD7ITgViJp68K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b4IGQg/btsN1ieQF2g/f2HKoM1pmTD7ITgViJp68K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb4IGQg%2FbtsN1ieQF2g%2Ff2HKoM1pmTD7ITgViJp68K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;851&quot; height=&quot;279&quot; data-origin-width=&quot;851&quot; data-origin-height=&quot;279&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;With the same distilled images, &lt;b&gt;using multiple steps clearly outperforms using a single step&lt;/b&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;853&quot; data-origin-height=&quot;295&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bpYmmA/btsN1tNUEMW/LczxE4UPVAMfY3IlGr4YwK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bpYmmA/btsN1tNUEMW/LczxE4UPVAMfY3IlGr4YwK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bpYmmA/btsN1tNUEMW/LczxE4UPVAMfY3IlGr4YwK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbpYmmA%2FbtsN1tNUEMW%2FLczxE4UPVAMfY3IlGr4YwK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;853&quot; height=&quot;295&quot; data-origin-width=&quot;853&quot; data-origin-height=&quot;295&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;Discussion&lt;/b&gt;&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;This paper is &lt;b&gt;the first to propose Dataset Distillation, which compresses the knowledge of an entire training set into a few synthetic images&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;The proposed method achieves &lt;b&gt;high classification accuracy with a small set of distilled images and only a few gradient descent steps&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;Future work should extend the approach to &lt;b&gt;large-scale visual data such as ImageNet&lt;/b&gt; as well as &lt;b&gt;other modalities such as audio and text&lt;/b&gt;.&lt;/li&gt;
&lt;li&gt;The current method is &lt;b&gt;sensitive to the model initialization distribution&lt;/b&gt;. &amp;rarr; Further research on &lt;b&gt;more robust initialization strategies&lt;/b&gt; is needed.&lt;/li&gt;
&lt;/ul&gt;</description>
      <category>Paper Review/Dataset Distillation</category>
      <category>computer_vision</category>
      <category>dataset_distillation</category>
      <category>performance matching</category>
      <author>成學</author>
      <guid isPermaLink="true">https://hakk35.tistory.com/48</guid>
      <comments>https://hakk35.tistory.com/48#entry48comment</comments>
      <pubDate>Sun, 18 May 2025 12:38:45 +0900</pubDate>
    </item>
  </channel>
</rss>