[DL Reading Group] Deep Learning with Implicit Gradients


Slide overview

2019/09/27
Deep Learning JP:
http://deeplearning.jp/seminar-2/


Text of each slide
1.

Deep Learning with Implicit Gradients
Shohei Taniguchi, Matsuo Lab (M1)

2.

Background
• Recently, using implicit differentiation for training seems to have become popular
• It looked interesting, so I looked into various papers
• This talk mainly covers the following two:
- Meta-Learning with Implicit Gradients
‣ Proposes iMAML, which can update the initialization without keeping the computation graph of MAML's inner updates
- RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
‣ Proposes ERNN, in which vanishing gradients never occur

3.

Outline
1. Prerequisites
- Explicit and implicit functions
- Implicit differentiation and the implicit function theorem
2. Prior work using implicit differentiation
- One easy-to-follow example to get used to how implicit functions are applied
‣ Implicit Reparameterization Gradients
3. Meta-Learning with Implicit Gradients
4. RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?

4.

Prerequisites

5.

Explicit and implicit functions
• Explicit function: written in the form y = f(x) (e.g. the quadratic y = ax^2 + bx + c)
- The relation between the variables is written out explicitly
- Differentiation and integration are easy
- Ordinary NNs take this form
• Implicit function: written in the form f(x, y) = 0 (e.g. the circle equation x^2 + y^2 = r^2)
- The relation between the variables is expressed as a multivariate equation
- Differentiation and integration are somewhat cumbersome
- Strictly speaking, not necessarily a function (when one x corresponds to multiple y, it cannot be called a function)

6.

Key theorems for implicit functions
• Implicit differentiation: when f(x, y) = 0,
dy/dx = −(∂f/∂x)/(∂f/∂y) = −f_x/f_y
• Implicit function theorem: if a point (x_0, y_0) satisfies f(x_0, y_0) = 0 and f_y(x_0, y_0) is nonsingular, then there exist neighborhoods U ∋ x_0 and V ∋ y_0 and a continuously differentiable function g : U → V such that
{(x, g(x)) | x ∈ U} = {(x, y) ∈ U × V | f(x, y) = 0}
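As a quick numerical check (not part of the original deck), here is a minimal Python sketch, assuming NumPy, that compares dy/dx = −f_x/f_y on the circle against a finite difference of the explicit branch y = √(r^2 − x^2):

```python
# Verify implicit differentiation on the circle f(x, y) = x^2 + y^2 - r^2 = 0,
# where dy/dx = -f_x / f_y = -x / y.
import numpy as np

r = 2.0
x0 = 1.0
y0 = np.sqrt(r**2 - x0**2)           # a point on the upper half of the circle

# Implicit differentiation: dy/dx = -(df/dx) / (df/dy)
fx, fy = 2 * x0, 2 * y0
dydx_implicit = -fx / fy

# Compare against the explicit branch y = g(x) = sqrt(r^2 - x^2)
eps = 1e-6
dydx_explicit = (np.sqrt(r**2 - (x0 + eps)**2)
                 - np.sqrt(r**2 - (x0 - eps)**2)) / (2 * eps)

print(dydx_implicit, dydx_explicit)  # both are approximately -0.577
```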

7.

An intuitive view of the implicit function theorem
• Given an implicit function f(x, y) = 0, once we find one point (x_0, y_0) satisfying it, f can be rewritten as a differentiable explicit function in a neighborhood of that point
- except when the tangent at that point is vertical (f_y(x_0, y_0) is singular)
Example: the circle equation x^2 + y^2 − r^2 = 0
- In a neighborhood of point A, y = √(r^2 − x^2), which is differentiable
- At point B, f_y(r, 0) = 2 × 0 = 0, which is singular, so no such rewriting exists (the sign in y = ±√(r^2 − x^2) is not determined)
• In two or more dimensions, f_y becomes a Jacobian, so check its determinant

8.

Cases where implicit functions are useful in deep learning (my view)
1. When part of the loss involves values that cannot be computed, or are costly to compute
- Even if the loss itself is unknown, we can still train if implicit functions give us (an approximation of) the loss gradient
- iMAML is this case
2. When we want to impose some constraint on features, etc.
- Usually a regularization term is added to the loss, constraining things implicitly, but with implicit functions the constraint can be imposed explicitly
- ERNN is this case


10.

Prior work using implicit differentiation: Implicit Reparameterization Gradients

11.

Bibliographic information
• Accepted at NeurIPS 2018
• Authors
- Michael Figurnov, Shakir Mohamed, Andriy Mnih
- DeepMind
• Uses implicit differentiation to propose a reparameterization trick applicable to many distributions
• I think this is a very clear use of implicit differentiation, so if you first get the picture from this, the later iMAML and ERNN parts should be easier to follow

12.

Reparameterization Trick
• The reconstruction term of the VAE objective involves marginalization, so its exact value cannot be computed:
E_{q(z;ϕ)}[log p(x | z)] − KL(q(z; ϕ) || p(z))
• When q is Gaussian, consider the transform ϵ = f(z; ϕ) = (z − μ_ϕ)/σ_ϕ; then ϵ ∼ N(0, 1) and the marginalization no longer depends on ϕ, so sampling ϵ gives an unbiased estimator of the gradient. This is the usual reparameterization trick:
∇_ϕ E_{q(z;ϕ)}[log p(x | z)] = E_{p(ϵ)}[∇_ϕ log p(x | z) |_{z = f^{-1}(ϵ; ϕ)}]
• However, this only works for distributions where such a transform f exists and its inverse f^{-1} is easy to compute
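To make the trick concrete, here is a minimal sketch, assuming NumPy; the objective E_q[z^2] is a hypothetical toy stand-in, not the VAE loss itself:

```python
# Standard reparameterization for a Gaussian q(z; phi) = N(mu, sigma^2):
# sample eps ~ N(0, 1), set z = mu + sigma * eps, so gradients w.r.t.
# (mu, sigma) flow through the deterministic transform.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.5

eps = rng.standard_normal(100_000)
z = mu + sigma * eps                 # z = f^{-1}(eps; phi)

# Toy objective E_q[z^2]; exact gradients are d/dmu = 2*mu, d/dsigma = 2*sigma.
grad_mu = np.mean(2 * z * 1.0)       # dz/dmu = 1
grad_sigma = np.mean(2 * z * eps)    # dz/dsigma = eps

print(grad_mu, grad_sigma)           # approximately 1.0 and 3.0
```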

13.

Implicit Reparameterization Gradients
• There is in fact one transform f that exists for any distribution → the cumulative distribution function (CDF)
- The value of the CDF follows the uniform distribution ϵ ∼ U(0, 1), independent of ϕ
- However, z = f^{-1}(ϵ; ϕ) is generally intractable, so the usual reparameterization cannot be used:
∇_ϕ E_{q(z;ϕ)}[log p(x | z)] = E_{p(ϵ)}[∇_ϕ log p(x | z)] = E_{p(ϵ)}[∇_z log p(x | z) ∇_ϕ z]
- Instead, consider computing ∇_ϕ z via implicit differentiation

14.

Implicit Reparameterization Gradients
• ϵ = f(z; ϕ) ⇔ f(z; ϕ) − ϵ = 0 is an implicit function of z and ϕ, so implicit differentiation gives
∇_ϕ z = −∇_ϕ f(z; ϕ) / ∇_z f(z; ϕ) = −∇_ϕ f(z; ϕ) / q(z; ϕ)
- This is computable!
- z can be sampled directly from q(z; ϕ), so we never need f^{-1}
• Reparameterization becomes usable for any distribution with a differentiable CDF!
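A minimal numerical illustration of this formula, assuming NumPy/SciPy; the Gaussian is chosen only because its answers are known in closed form:

```python
# Implicit reparameterization for a Gaussian, where the transform f is the
# CDF F(z; mu, sigma) = Phi((z - mu) / sigma). Then
# grad_phi z = -grad_phi F(z; phi) / q(z; phi), with no need to invert F.
import numpy as np
from scipy.stats import norm

mu, sigma = 0.5, 1.5
z = 2.0                               # a sample drawn directly from q(z; phi)

q = norm.pdf(z, loc=mu, scale=sigma)  # grad_z F(z; phi) is just the density

# Finite-difference gradients of the CDF w.r.t. the parameters
eps = 1e-6
dF_dmu = (norm.cdf(z, mu + eps, sigma) - norm.cdf(z, mu - eps, sigma)) / (2 * eps)
dF_dsigma = (norm.cdf(z, mu, sigma + eps) - norm.cdf(z, mu, sigma - eps)) / (2 * eps)

dz_dmu = -dF_dmu / q          # analytic answer: 1
dz_dsigma = -dF_dsigma / q    # analytic answer: (z - mu) / sigma = 1.0

print(dz_dmu, dz_dsigma)
```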

15.

Meta-Learning with Implicit Gradients

16.

Bibliographic information
• Accepted at NeurIPS 2019
• Authors
- Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
- the familiar MAML crowd
• A study that applies implicit differentiation to training MAML

17.

Model-Agnostic Meta-Learning (MAML)
• A meta-learning method that uses gradient descent to learn an initialization that can adapt to various tasks with a small number of parameter updates:
θ*_ML := argmin_{θ∈Θ} F(θ), where F(θ) = (1/M) Σ_{i=1}^{M} L(Alg_i(θ), D_i^test)
- Learns an initialization that minimizes the generalization error over arbitrary tasks
- With a single parameter update (one-step adaptation):
Alg_i(θ) = θ − α ∇_θ L(θ, D_i^tr)
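A minimal sketch of one-step adaptation, assuming NumPy; the quadratic per-task loss and the task optima c_i are hypothetical toys introduced only for illustration:

```python
# One-step inner adaptation Alg_i(theta) = theta - alpha * grad L_i(theta)
# on a toy quadratic task loss L_i(theta) = ||theta - c_i||^2 / 2.
import numpy as np

alpha = 0.1  # inner-loop step size

def inner_loss_grad(theta, c_i):
    # grad_theta L_i(theta) for the toy loss ||theta - c_i||^2 / 2
    return theta - c_i

def adapt(theta, c_i):
    # One-step adaptation: Alg_i(theta) = theta - alpha * grad L_i(theta)
    return theta - alpha * inner_loss_grad(theta, c_i)

theta = np.zeros(3)
task_optima = [np.array([1.0, 0.0, -1.0]), np.array([0.5, 2.0, 0.0])]
adapted = [adapt(theta, c) for c in task_optima]

# F(theta) averages the post-adaptation (test) losses over tasks
F = np.mean([0.5 * np.sum((phi - c) ** 2) for phi, c in zip(adapted, task_optima)])
print(F)
```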

18.

Limitations of MAML
• Memory usage grows in proportion to the number of parameter updates
- Computing the gradient ∇_θ F(θ) w.r.t. the initialization requires keeping the entire computation graph of Alg_i(θ)
• Because of this constraint, MAML could only be used for tasks that can be adapted to within a few parameter updates
• FOMAML, which approximates the meta-gradient to first order, keeps memory consumption constant, but the approximation error hurts accuracy
- For details on FOMAML, see Kondo-kun's slides: https://www.slideshare.net/DeepLearningJP2016/dl1maml
• iMAML solves this fundamentally, without sacrificing accuracy

19.

The inner-loop objective
• To prevent vanishing gradients as the number of updates grows, add a regularizer that keeps the updated parameters from drifting too far from the initialization:
Alg*_i(θ) = argmin_{ϕ'∈Φ} G_i(ϕ', θ), where G_i(ϕ', θ) = L̂(ϕ') + (λ/2) ||ϕ' − θ||^2
• That said, this is probably introduced only to make the implicit differentiation easy to compute, so it is not really essential

20.

Outer loop with implicit differentiation
• MAML's outer-loop update:
θ ← θ − η dF(θ)/dθ = θ − η (1/M) Σ_{i=1}^{M} (dAlg_i(θ)/dθ)^T ∇_ϕ L_i(Alg_i(θ)), where ϕ = Alg_i(θ)
• If dAlg_i(θ)/dθ can be computed in one shot, memory consumption should stay constant no matter how many inner-loop updates there are
➡ With implicit differentiation, it can be computed in one shot!

21.

Outer loop with implicit differentiation
• Assume the inner loop attains the exact optimum:
ϕ_i ≡ Alg*_i(θ) = argmin_{ϕ'∈Φ} G_i(ϕ', θ)
• Then, using ∇_{ϕ'} G_i(ϕ', θ)|_{ϕ'=ϕ_i} = 0, we obtain
∇L̂(ϕ_i) + λ(Alg*_i(θ) − θ) = 0,
an implicit function relating θ and Alg*_i(θ)
• Applying the implicit differentiation formula to this gives
dAlg*_i(θ)/dθ = (I + (1/λ) ∇²L̂(ϕ_i))^{-1}
- This is computable given only the adapted ϕ_i!
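The slide compresses the implicit differentiation step; spelled out in the same notation, with ϕ_i = Alg*_i(θ), it is:

```latex
% Differentiate the stationarity condition
%   \nabla\hat{\mathcal{L}}(\phi_i) + \lambda(\phi_i - \theta) = 0
% with respect to \theta:
\nabla^2 \hat{\mathcal{L}}(\phi_i)\,\frac{d\phi_i}{d\theta}
  + \lambda\!\left(\frac{d\phi_i}{d\theta} - I\right) = 0
\quad\Longrightarrow\quad
\frac{d\phi_i}{d\theta}
  = \lambda\left(\nabla^2 \hat{\mathcal{L}}(\phi_i) + \lambda I\right)^{-1}
  = \left(I + \tfrac{1}{\lambda}\nabla^2 \hat{\mathcal{L}}(\phi_i)\right)^{-1}
```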

22.

Outer loop with implicit differentiation
• Two problems arise if we try to compute (I + (1/λ) ∇²L̂(ϕ_i))^{-1} directly:
① The adapted parameters ϕ_i from the inner loop do not necessarily converge to the exact solution (we only run a few SGD updates)
② Computing the matrix inverse costs on the order of the cube of the number of parameters
• Instead, use the conjugate gradient method to obtain an approximate solution of (I + (1/λ) ∇²L̂(ϕ_i))^{-1} ∇_ϕ L_i(Alg_i(θ))

23.

Conjugate gradient method (CG)
• An iterative numerical method for the linear system Ax = b ⋯ (1)
• Noting that (1) can be recast as minimizing f(x) = (1/2) x^T A x − b^T x, initialize x_0 = 0, r_0 = b − A x_0, p_0 = r_0 and repeat the following until convergence:
α_k = r_k^T p_k / (p_k^T A p_k)
x_{k+1} = x_k + α_k p_k
r_{k+1} = r_k − α_k A p_k
p_{k+1} = r_{k+1} + (r_{k+1}^T r_{k+1} / r_k^T r_k) p_k
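A minimal runnable sketch of these updates, assuming NumPy; the 2x2 system is a hypothetical example:

```python
# Conjugate gradient for Ax = b with symmetric positive-definite A,
# following the update rules on this slide.
import numpy as np

def conjugate_gradient(A, b, n_iters=50, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x          # residual r_0
    p = r.copy()           # search direction p_0
    for _ in range(n_iters):
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:   # ||r_k|| doubles as an accuracy gauge
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))  # should agree
```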

24.

Conjugate gradient method (CG)
• Setting g_i = (I + (1/λ) ∇²L̂(ϕ_i))^{-1} ∇_ϕ L_i(Alg_i(θ)), g_i is the solution of the linear system
(I + (1/λ) ∇²L̂(ϕ_i)) g_i = ∇_ϕ L_i(Alg_i(θ)),
so the conjugate gradient method applies
• CG lets us gauge the accuracy of the solution via r_k, so we can trade approximation accuracy against compute (empirically, about 5 iterations are enough)
- Note, however, that this does not account for the accuracy of Alg_i(θ) itself (problem ① on slide 22 remains unsolved)
‣ Appendix E analyzes this point
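A minimal sketch of that linear solve, assuming NumPy; H here is a hypothetical stand-in Hessian, and in a real model matvec would be an autodiff Hessian-vector product rather than an explicit matrix:

```python
# Solve (I + (1/lambda) H) g = v with conjugate gradient, using only
# matrix-vector products so the matrix is never formed or inverted.
import numpy as np

lam = 1.0
H = np.array([[2.0, 0.5], [0.5, 1.0]])      # stand-in for the inner-loss Hessian
v = np.array([1.0, -1.0])                   # stand-in for the outer-loss gradient

def matvec(p):
    # (I + (1/lambda) H) p; in practice this would be an autodiff hvp
    return p + (H @ p) / lam

g = np.zeros_like(v)
r = v - matvec(g)
p = r.copy()
for _ in range(5):                           # a handful of iterations, as in the paper
    Ap = matvec(p)
    alpha = (r @ r) / (p @ Ap)
    g = g + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-12:
        break
    p = r_new + (r_new @ r_new) / (r @ r) * p
    r = r_new

print(g, np.linalg.solve(np.eye(2) + H / lam, v))  # should agree
```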

25.

Advantages of iMAML
• Memory consumption is constant in the number of inner-loop updates
➡ Scales to hard tasks where adaptation requires many updates
• The outer loop does not depend on how the inner loop performs its updates
➡ No restriction on the inner-loop optimization algorithm
‣ Vanilla MAML could only use algorithms relying solely on first-order gradients
‣ iMAML can use second-order algorithms such as Hessian-Free, enabling faster adaptation

26.

Experiments
• Experiments on toy data where the meta-gradient can be computed analytically
- iMAML's memory consumption is constant (O(1)) in the number of inner-loop steps
- Compute efficiency is also good, though slower than FOMAML (because of the CG iterations)
- The meta-gradient approximation error is also smaller than MAML's (why is there no comparison with FOMAML??)

27.

Experiments
• Omniglot
- iMAML with Hessian-Free in the inner loop is the strongest
- iMAML is especially strong on hard tasks with many ways (classes)
- FOMAML's accuracy drops sharply as tasks get harder

28.

Experiments
• Mini-ImageNet
- Slightly loses to Reptile (an improved variant of FOMAML)
- The paper says accuracy might improve a bit with more hyperparameter tuning, but who knows??

29.

iMAML summary
• Proposes iMAML, which computes the meta-gradient via implicit differentiation, is memory-efficient, and scales to hard tasks
• Fundamentally resolves MAML's various constraints without sacrificing accuracy
• The paper also has solid theoretical analysis, which is impressive
• Implementation also looks fairly simple
• I am curious how hard a task it can scale to
- e.g., how well it would work for meta reinforcement learning

30.

Cases where implicit functions are useful in deep learning (my view)
1. When part of the loss involves values that cannot be computed, or are costly to compute
- Even if the loss itself is unknown, we can still train if implicit functions give us (an approximation of) the loss gradient
- iMAML is this case
2. When we want to impose some constraint on features, etc.
- Usually a regularization term is added to the loss, constraining things implicitly, but with implicit functions the constraint can be imposed explicitly
- ERNN is this case

31.

RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?

32.

Bibliographic information
• Authors
- Anil Kag, Ziming Zhang, Venkatesh Saligrama
- Boston University, MERL
• Apparently rejected from NeurIPS 2019
• Fundamentally solves the RNN vanishing gradient problem by making the hidden state evolve on the equilibrium manifold of an ordinary differential equation
• Extremely interesting
• Extremely hard to read

33.

Vanishing/exploding gradients in RNNs
• The RNN update:
h_k = ϕ(U h_{k−1} + W x_k + b)
- ϕ is usually the sigmoid or tanh function
• RNNs frequently suffer from vanishing/exploding gradients:
∂h_m/∂h_n = Π_{m≥k>n} ∂h_k/∂h_{k−1} = Π_{m≥k>n} ∇ϕ(U h_{k−1} + W x_k + b) U
- LSTM and GRU mitigate this with gating mechanisms, but they increase the parameter count and do not solve the problem fundamentally
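A minimal demonstration of that Jacobian product, assuming NumPy; the random weights and scales are hypothetical, chosen so the toy exhibits vanishing:

```python
# Accumulate the product of per-step Jacobians dh_k/dh_{k-1} for a tanh RNN
# and watch its norm collapse as the number of time steps grows.
import numpy as np

rng = np.random.default_rng(0)
d = 32
U = rng.standard_normal((d, d)) * 0.5 / np.sqrt(d)  # hypothetical recurrent weights
W = rng.standard_normal((d, d)) / np.sqrt(d)
b = np.zeros(d)

h = np.zeros(d)
J = np.eye(d)
for k in range(50):
    x = rng.standard_normal(d)
    pre = U @ h + W @ x + b
    h = np.tanh(pre)
    # dh_k/dh_{k-1} = diag(1 - tanh(pre)^2) U for the tanh nonlinearity
    J = (np.diag(1 - h ** 2) @ U) @ J
    if (k + 1) % 10 == 0:
        print(k + 1, np.linalg.norm(J))  # Frobenius norm shrinks toward 0
```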

34.

An ODE view of RNNs
• If the RNN update is slightly modified to include a skip connection, it can be interpreted as numerically solving an ordinary differential equation (ODE) with the Euler method:
dh(t)/dt ≜ h'(t) = ϕ(U h(t) + W x_k + b) ⟹ h_k = h_{k−1} + η ϕ(U h_{k−1} + W x_k + b)
• This was also pointed out in the Neural ODE paper
- For details, see these DL seminar slides: https://www.slideshare.net/DeepLearningJP2016/dlneural-ordinarydifferential-equations
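The Euler-step reading, as a minimal sketch assuming NumPy (weights are hypothetical):

```python
# An RNN update with a skip connection is one Euler step of the ODE
# h'(t) = phi(U h(t) + W x_k + b) with step size eta.
import numpy as np

def euler_rnn_step(h_prev, x_k, U, W, b, eta=0.1):
    # h_k = h_{k-1} + eta * phi(U h_{k-1} + W x_k + b)
    return h_prev + eta * np.tanh(U @ h_prev + W @ x_k + b)

rng = np.random.default_rng(0)
d = 4
U, W, b = rng.standard_normal((d, d)), rng.standard_normal((d, d)), np.zeros(d)

h = np.zeros(d)
for _ in range(3):
    h = euler_rnn_step(h, rng.standard_normal(d), U, W, b)
print(h)
```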

35.

Equilibrium manifolds of ODEs
• For the ODE dh/dt = f(h, x) ⋯ (1), points satisfying f(h, x) = 0 are called equilibrium points
• (1) is an implicit function of h and x, so by the implicit function theorem, once we find one equilibrium point (h_0, x_0) with f_h(h_0, x_0) nonsingular, there exists a differentiable explicit function h = g(x) satisfying (1) in its neighborhood
➡ Around (h_0, x_0) there is a smooth space of connected equilibrium points (an equilibrium manifold)
• ERNN's idea is to define the hidden state on this equilibrium manifold

36.

ERNN
• ERNN considers the ODE
h'(t) = ϕ(U(h(t) + h_{k−1}) + W x_k + b) − γ(h(t) + h_{k−1})
and updates the hidden state by taking the solution of h'(t) = 0 as h_k
• Then h_k obeys the implicit function
f(h_{k−1}, h) = ϕ(U(h + h_{k−1}) + W x_k + b) − γ(h + h_{k−1}) = 0,
so applying the implicit differentiation formula gives
∂h/∂h_{k−1} = −(∂f/∂h_{k−1})/(∂f/∂h) = −I,
i.e., the Jacobian is always the negative identity matrix
➡ Vanishing gradients cannot occur in principle!
However, ∂f/∂h being nonsingular is a necessary condition

37.

Nonsingularity of ∂f/∂h
• The conditions for ∂f/∂h = ∇ϕ(U(h + h_{k−1}) + W x_k + b) U to be nonsingular are:
1. ϕ is differentiable at every point of its domain (OK for sigmoid and tanh)
2. U is a nonsingular matrix
- I don't think the paper mentions how to enforce this constraint (please tell me if you know)

38.

Finding the equilibrium point
• Starting from h_k^(0) = 0, iterate the following update until convergence:
h_k^(i+1) = h_k^(i) + η_k^(i) [ϕ(U(h_k^(i) + h_{k−1}) + W x_k + b) − γ(h_k^(i) + h_{k−1})]
• Empirically this converges in about 5 steps
• The paper shows that when the step size η_k^(i) satisfies a certain condition, the iteration converges linearly to the equilibrium point
- Basically, sufficiently small values are fine
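A minimal sketch of this fixed-point iteration, assuming NumPy; the weight scales, γ, and η are hypothetical choices picked so the toy contracts (the paper's ~5-step claim need not hold here):

```python
# ERNN's equilibrium search: start from h_k^(0) = 0 and iterate the damped
# update until h'(t) is approximately 0.
import numpy as np

def ernn_step(h_prev, x_k, U, W, b, gamma=1.0, eta=0.5, n_iters=200, tol=1e-8):
    h = np.zeros_like(h_prev)                # h_k^(0) = 0
    for _ in range(n_iters):
        # residual = phi(U(h^(i) + h_{k-1}) + W x_k + b) - gamma (h^(i) + h_{k-1})
        residual = np.tanh(U @ (h + h_prev) + W @ x_k + b) - gamma * (h + h_prev)
        h = h + eta * residual               # h^(i+1) = h^(i) + eta * residual
        if np.linalg.norm(residual) < tol:   # converged to an equilibrium point
            break
    return h

rng = np.random.default_rng(0)
d = 4
U = rng.standard_normal((d, d)) * 0.2
W = rng.standard_normal((d, d)) * 0.2
b = np.zeros(d)

h = np.zeros(d)
for _ in range(3):
    h = ernn_step(h, rng.standard_normal(d), U, W, b)
print(h)
```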

39.

Experiments
Plot of |∂h_T/∂h_1| during training (log scale) for RNN vs. ERNN on HAR-2
• The RNN's gradient values are unstable
• ERNN stays stable at almost exactly 1

40.

Experiments
Plot of hidden-state trajectories
• The vanilla RNN's transitions are complex, whereas ERNN transitions smoothly on the equilibrium manifold

41.

Experiments
• SoTA on many benchmarks
• Fewer parameters as well
• Training is also stable and fast

42.

ERNN summary
• Proposes an RNN in which the hidden state moves on an equilibrium manifold defined by a NN; by implicit differentiation, the gradient norm is always 1 and vanishing gradients never occur
• Because gradients do not vanish, long-term dependencies can be captured well
• Achieves SoTA on many benchmarks
• Fundamentally solves a long-standing problem of RNNs; it feels like a breakthrough
• However, the paper is very hard to read, so I hope it improves before it gets accepted

43.

Overall summary & impressions
• Introduced recent research using implicit differentiation
• Both iMAML and ERNN fundamentally resolve problems of existing methods, which I felt is a major advance
• I expect research exploiting implicit differentiation to keep spreading