{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Medical AI Course Materials : 07_DNA_Sequence_Data_Analysis.ipynb", "version": "0.3.2", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.2" } }, "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "yoiTUC_zXAb9" }, "source": [ "# 実践編: ディープラーニングを使った配列解析\n", "\n", "近年,次世代シーケンサ(NGS; Next Generation Sequencer)の発展により,遺伝子の塩基配列が高速,大量,安価に読み取られるようになってきました.\n", "\n", "ここではディープラーニングを用いて,DNA配列からエピジェネティックな影響や転写制御を予測する問題に取り組みます.ディープラーニングは複雑なモデルを表現でき,遠距離の影響も考慮することができ,より高い精度で予測することが期待できます.\n", "\n", "\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "CkqRZHc8crS4" }, "source": [ "## 環境構築\n", "\n", "ここで用いるライブラリは\n", "\n", "\n", "* Chainer\n", "* Cupy\n", "* matplotlib\n", "\n", "です.Google Colab上では,これらはあらかじめインストールされています.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mEPaj4MfdEyh" }, "source": [ "以下のセルを実行して,各ライブラリのバージョンを確認してください.\n", "\n" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "p4X-dmKrdDhd", "outputId": "465abf75-5b3c-4305-b70e-ec577974801f", "colab": { "base_uri": "https://localhost:8080/", "height": 300 } }, "source": [ "import chainer\n", "import cupy\n", "import matplotlib\n", "\n", "chainer.print_runtime_info()\n", "print('matplotlib:', matplotlib.__version__)" ], "execution_count": 1, "outputs": [ { "output_type": "stream", "text": [ "Platform: Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic\n", "Chainer: 6.5.0\n", "ChainerX: Not Available\n", "NumPy: 1.17.3\n", "CuPy:\n", " CuPy Version : 6.5.0\n", " CUDA Root : /usr/local/cuda\n", " CUDA Build Version : 
10000\n", " CUDA Driver Version : 10010\n", " CUDA Runtime Version : 10000\n", " cuDNN Build Version : 7603\n", " cuDNN Version : 7603\n", " NCCL Build Version : 2402\n", " NCCL Runtime Version : 2402\n", "iDeep: 2.0.0.post3\n", "matplotlib: 3.1.1\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "0nsNNeFEkjB_" }, "source": [ "期待される実行結果例\n", "```\n", "Platform: Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic\n", "Chainer: 6.5.0\n", "ChainerX: Not Available\n", "NumPy: 1.17.3\n", "CuPy:\n", " CuPy Version : 6.5.0\n", " CUDA Root : /usr/local/cuda\n", " CUDA Build Version : 10000\n", " CUDA Driver Version : 10010\n", " CUDA Runtime Version : 10000\n", " cuDNN Build Version : 7603\n", " cuDNN Version : 7603\n", " NCCL Build Version : 2402\n", " NCCL Runtime Version : 2402\n", "iDeep: 2.0.0.post3\n", "matplotlib: 3.1.1\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "MzdDwd7aeYmT" }, "source": [ "## 配列解析について\n", "\n", "次世代シーケンサの発展・普及とともに,大量の遺伝子配列が読み取られるようになりました.そうした中で,塩基配列で表現された遺伝子型と病気や形態などの表現型との関係を推定するようなGWAS(Genome Wide Association Study; ゲノムワイド関連解析)が広がってきました.しかし,遺伝子の変異だけでは全ての表現型の変化を説明できないことがわかってきました.特に,非翻訳領域が遺伝子発現に影響を与え,表現型の変化を生じさせていることが様々な実験結果からわかってきています.遺伝子発現時に周辺領域がどのように影響を与えているのかを調べるために様々な手法が提案されています.\n", "\n", "![エピゲノム解析概略図(Encode Projectより引用)](https://www.encodeproject.org/images/c45f4d8c-0340-4fcb-abe3-e4ff0bb919be/download/attachment/EncodeDatatypes2013-7.png)\n", "\n", "引用元 : [https://www.encodeproject.org/images/c45f4d8c-0340-4fcb-abe3-e4ff0bb919be/download/attachment/EncodeDatatypes2013-7.png](https://www.encodeproject.org/images/c45f4d8c-0340-4fcb-abe3-e4ff0bb919be/download/attachment/EncodeDatatypes2013-7.png)\n", "\n", "例えば,ChIP-seq(クロマチン免疫沈降シーケンス)は,ChIP(クロマチン免疫沈降)と高速DNAシーケンスを組み合わせることで,ヒストン修飾状況や転写調節因子の結合部位を網羅的(ゲノムワイド)に同定する手法です.これにより,転写調節機能を司るヒストン修飾やDNA結合タンパクの結合部位をゲノム全体で同定することができ,遺伝子変異だけでは説明しきれない細胞の表現型に関与する膨大な情報の取得が可能になります.\n", "\n", 
"そこで本節では,ChIP-seqにより得られた転写調節因子の結合部位に当たるDNA塩基配列のパターンを深層学習により学習することで,任意のDNA塩基配列に対して特定の転写調節因子との結合可能性の予測を行います.このアプローチはゲノム全体のヒストン修飾部位の予測やオープンクロマチン領域の予測など幅広い生命現象を統一的に取り扱うことを可能とします.\n", "\n", "この課題を機械学習で取り扱う際の技術的な難しさの一つが,DNA塩基配列の長距離相互作用と呼ばれる現象です.これは,核内のDNAは複雑に折り畳まれた様式で存在しており,塩基配列上の並びとしては遠く離れた2つの領域が空間的には近い距離に位置し,転写調節因子の結合に影響を及ぼすことがあるということです.例えば,今回対象とする問題では10万bp (ベース・ペア:DNAを構成する塩基を数える単位) 超の長さのDNA塩基配列を入力として受け取り,DNA塩基配列中のある領域が転写調節因子の結合部位になり得るかを予測します.このような長距離相互作用を考慮しても効率的に学習可能なモデルを構築していきます.\n", "\n", "今回は,数百種類のヒトの細胞型から得られた数千のChIP-seq,DNase-seq(オープンクロマチン領域の網羅的解析の一手法)のデータセットから得られたDNA塩基配列を入力として,CAGE(Cap Analysis of Gene Expression)の結果として計測されたmRNAの発現量を推定する問題を考えます[1]." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "db3ngEYHgSmd" }, "source": [ "## データセット\n", "\n", "ここでは,Basenji[1]で使われた実験データセットの一部を利用します.これらはCAGEなどの配列解析処理を行って得られたデータセットです.\n", "\n", "下のセルを実行してデータをダウンロードしてください.\n", "\n", "この配列はそれぞれが長さ131072bpからなり,128bp毎にそのカバレッジ値が記録されています.このカバレッジ値の配列の長さは131072/128=1024です.\n", "\n", "この問題では,長さ131072bpの配列を入力として受け取り,この128bp毎のカバレッジ値を推定することが目標です.\n", "\n", "今回は10種類の異なる実験のカバレッジ値を同時に予測する問題を扱います." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "LjxWi_2chwkX", "outputId": "69c63cd9-0c1e-4e16-e04e-20bdb5fde838", "colab": { "base_uri": "https://localhost:8080/", "height": 325 } }, "source": [ "!wget https://github.com/japan-medical-ai/medical-ai-course-materials/releases/download/v0.1/seq.h5" ], "execution_count": 2, "outputs": [ { "output_type": "stream", "text": [ "--2018-12-16 04:41:34-- https://github.com/japan-medical-ai/medical-ai-course-materials/releases/download/v0.1/seq.h5\n", "Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112\n", "Connecting to github.com (github.com)|192.30.253.113|:443... connected.\n", "HTTP request sent, awaiting response... 
302 Found\n", "Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/153412006/c79a0800-f713-11e8-8d6c-255563d45b1b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20181216%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181216T044134Z&X-Amz-Expires=300&X-Amz-Signature=df390bc2ed4392cbdd65444198dcec236c19532e06158d89cda5c2fe4e17f5db&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dseq.h5&response-content-type=application%2Foctet-stream [following]\n", "--2018-12-16 04:41:34-- https://github-production-release-asset-2e65be.s3.amazonaws.com/153412006/c79a0800-f713-11e8-8d6c-255563d45b1b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20181216%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181216T044134Z&X-Amz-Expires=300&X-Amz-Signature=df390bc2ed4392cbdd65444198dcec236c19532e06158d89cda5c2fe4e17f5db&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dseq.h5&response-content-type=application%2Foctet-stream\n", "Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.137.212\n", "Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.137.212|:443... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 594118876 (567M) [application/octet-stream]\n", "Saving to: ‘seq.h5’\n", "\n", "seq.h5 100%[===================>] 566.60M 71.1MB/s in 8.5s \n", "\n", "2018-12-16 04:41:43 (66.7 MB/s) - ‘seq.h5’ saved [594118876/594118876]\n", "\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "9yuEHGl_XC2B", "outputId": "6e0e3ee2-0dc3-4b22-cb0a-f44bf829a950", "colab": { "base_uri": "https://localhost:8080/", "height": 71 } }, "source": [ "!ls -lh" ], "execution_count": 3, "outputs": [ { "output_type": "stream", "text": [ "total 567M\n", "drwxr-xr-x 1 root root 4.0K Dec 10 17:34 sample_data\n", "-rw-r--r-- 1 root root 567M Dec 3 06:54 seq.h5\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l8NsMudgiHBI" }, "source": [ "seq.h5というファイルが正しくダウンロードされているかを確認してください.サイズは567MBです.\n", "\n", "seq.h5はHDF5形式でデータを格納したファイルです.HDF5ファイルは,ファイルシステムと同様に,階層的にデータを格納することができ,行列やテンソルデータをそれぞれの位置で名前付きで格納することができます.\n", "\n", "HDF5形式のファイルを操作するためにh5pyというライブラリがあります.h5pyのFile()関数でファイルを開き,keys()関数でその中に含まれているキーを列挙します.また,取得したキーを`[]`内で指定することで,そのキーに紐付けられて格納されている各データを参照することができます.\n", "\n", "テンソルデータはnumpyと同様にshapeという属性でそのサイズを取得することができます.\n", "\n", "以下のセルを実行して格納されているデータを確認してください.\n", "\n", "各データの名前にtrain(学習),validate(検証),test(テスト)の接頭辞がつけられ,inが入力の塩基配列,outが出力のカバレッジ値に対応します.\n", "\n", "例えば,'train_in'は学習用の入力データであり,(5000, 131072, 4)というサイズを持ちます.これは長さ131072の配列が5000個あり,それぞれA, T, C, Gに対応する次元の値が1,それ以外は0であるような配列です.\n", "\n", "また,'train_out'は学習用の出力データであり,(5000, 1024, 10)というサイズを持ちます.これは長さ1024の配列が5000個あり,それぞれに10種類の異なるChIP-seqの結果のカバレッジ値が格納されています."
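, "\n", "例えば,h5pyで個々のデータを読み出すには次のように書けます(動作を説明するための最小例です).\n", "```python\n", "import h5py\n", "\n", "with h5py.File('seq.h5', 'r') as hf:\n", "    x0 = hf['train_in'][0]   # 最初の学習用入力: (131072, 4) のone-hot配列\n", "    y0 = hf['train_out'][0]  # 対応する出力: (1024, 10) のカバレッジ値\n", "    print(x0.shape, y0.shape)\n", "```"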
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "bBQVPyKxi-uE", "outputId": "7c950b79-5422-4dea-9889-f9d15294ca31", "colab": { "base_uri": "https://localhost:8080/", "height": 143 } }, "source": [ "import h5py\n", "import numpy as np\n", "\n", "with h5py.File('seq.h5', 'r') as hf:\n", "    for key in hf.keys():\n", "        print(key, hf[key].shape, hf[key].dtype)" ], "execution_count": 4, "outputs": [ { "output_type": "stream", "text": [ "target_labels (10,) |S29\n", "test_in (500, 131072, 4) bool\n", "test_out (500, 1024, 10) float16\n", "train_in (5000, 131072, 4) bool\n", "train_out (5000, 1024, 10) float16\n", "valid_in (500, 131072, 4) bool\n", "valid_out (500, 1024, 10) float16\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "T7OkqzE-jXlq" }, "source": [ "期待される実行結果例\n", "```\n", "target_labels (10,) |S29\n", "test_in (500, 131072, 4) bool\n", "test_out (500, 1024, 10) float16\n", "train_in (5000, 131072, 4) bool\n", "train_out (5000, 1024, 10) float16\n", "valid_in (500, 131072, 4) bool\n", "valid_out (500, 1024, 10) float16\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9dB69Y47k-y_" }, "source": [ "## Dilated Convolutionを用いた解析\n", "\n", "### 配列解析の戦略\n", "\n", "今回は配列データが入力であるような問題です.\n", "\n", "配列データを扱うためには大きく3つの戦略があります.\n", "\n", "一つ目は,配列中の順序情報は捨てて,配列をその特徴の集合とみなすことです.これをBag of Words(BoW)表現とよびます.このBoW表現は特徴に十分な情報が含まれていれば強力な手法ですが,DNA配列のような4種類の文字からなる配列やその部分配列だけではその特徴を捉えることは困難です.\n", "\n", "二つ目は,配列中の要素を左から右に順に読み込んでいき計算していく手法です.これには4章でも少し触れたRNNを用います.RNNは時刻毎に入力を一つずつ読み取り,内部状態を更新していきます.RNNの問題点は,その計算が逐次的であり,計算量が配列長に比例するという点です.現在の計算機は計算を並列化することで高速化を達成していますが,RNNは計算を並列化することが困難です.もう一つの問題は,遠距離間の関係を捉えることが難しいという点です.RNNはその計算方式から,計算の途中結果を全て固定長の内部状態ベクトルに格納する必要があります.遠距離間の関係を捉えようとすると多くの情報を覚えておかなければなりませんが,状態ベクトルのサイズは有限なので,遠距離間の関係を捉えることが困難となっていきます.\n", "\n", "三つ目は,配列データを1次元の画像とみなし,画像処理の時と同様にCNNを用いて解析する手法です.CNNはRNNの場合と違って各位置の処理を独立に実行できるため,並列に処理することができます.\n", "\n", "今回はこの3つ目の戦略,CNNを用いて解析する手法を採用します.また,Dilated Convolutionを使うことで,各位置の処理は遠距離にある情報を直接読み取ることができます.次の節でDilated 
Convolutionについて詳しくみていきます.\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "RWl4yYm_nqg7" }, "source": [ "### Dilated Convolution\n", "\n", "従来の畳み込み層を使って配列解析をする場合を考えてみます.\n", "以下の図のように,ある位置の入力の情報は各層で隣接する位置からしか読み込まれません.どのくらい離れた位置から情報を取得するかはカーネルサイズによって決定され,カーネルサイズが$K$の時,$D$だけ離れた距離にある情報を取得するためにはおよそ$D/K$層必要となります.今回の問題の場合,$D$は数百から数万,$K$は3や5といった値ですので,必要な層数も百から万といった数になってしまい現実的ではありません." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "gJ2MdbaHneLk" }, "source": [ "\n", "![従来の畳み込み層の計算イメージ](http://musyoku.github.io/images/post/2016-09-17/naive_conv.png)\n", "\n", "[WaveNet: A Generative Model for Raw Audio](https://deepmind.com/blog/wavenet-generative-model-raw-audio/)より引用" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Gazys1FUoV4m" }, "source": [ "それに対し,Dilated Convolution(atrous convolutionやconvolution with holesともよばれます)は,入力を読み取る位置をずらし,離れたところから情報を受け取ります.例えばDilation=4の場合,4だけ離れた位置から情報を受け取ります.このDilationを倍々にしていき,カーネルサイズを2とした場合,$D$だけ離れた位置の情報を受け取るには $\\log_2 D$ 層だけ必要になります.今回のように$D$が数百から数万の場合,10から20層程度あれば済むことになります.\n", "\n", "今回はこのDilated Convolutionを使うことで遠距離にある情報を考慮できるモデルを作成します."
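, "\n", "例えば,カーネルサイズ2でDilationを$1, 2, 4, \\ldots, 2^{L-1}$と倍々に増やしながら$L$層重ねると,受容野は$2^L$に広がります.今回のように長さ$1024 = 2^{10}$のベクトル列を扱う場合,$L = 10$層で配列全体を見渡せる計算になります."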
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Vl5f4eonQGU9" }, "source": [ "\n", "![Dilated Convolutionの計算イメージ](https://storage.googleapis.com/deepmind-live-cms/documents/BlogPost-Fig2-Anim-160908-r01.gif)\n", "\n", "[WAVENET: A GENERATIVE MODEL FOR RAW AUDIO, blog](https://deepmind.com/blog/wavenet-generative-model-raw-audio/)より\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "p0bcCznko_wD" }, "source": [ "### ブロック\n", "\n", "それでは最初に,ネットワークの全体を設計します.\n", "このネットワークは二つのブロックから構成されます.\n", "\n", "1つ目のブロックは長さが$2^{17}$の配列を入力として長さが$2^{10}$のベクトルを出力とします.これにより入力の128 ($=2^{17}/2^{10}$)bpが出力の1つの位置に対応するようになります.これを実現しているのが,SqueezeBlockです.すなわち,SqueezeBlockは長さ131072bpからなるDNAの塩基配列を入力として受け取り,各フラグメントの長さに相当する128bp毎の情報が一つの値となるような畳込み処理を行います.結果として131072/128=1024の長さのベクトル列が出力されます.このベクトル列はフラグメント毎の特徴が一つのベクトルに圧縮されたものとみなすことができます.\n", "\n", "二つ目のブロックは遠距離にある情報を考慮して各ベクトルの値を計算していく部分であり,DilatedBlockが担当します.DilatedBlockは,SqueezeBlockから出力された1024の長さのベクトル列を受け取り,Dilated Convolutionの仕組みを使うことで互いに離れた位置の情報を効率的に考慮した上で処理していき,入力と同じ1024の長さの出力を返します.この出力が,フラグメント毎に与えられたDNA関連タンパク質の結合可能性を表す数値(カバレッジ値)と一致するように学習を進めます.\n", "\n", "それでは,以下のコードを実行してみましょう.\n" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "5M6BDmVdpLkE", "colab": {} }, "source": [ "import chainer\n", "import chainer.functions as F\n", "import chainer.links as L\n", "import cupy as cp\n", "\n", "bc = 24 # base channel\n", "\n", "default_squeeze_params = [\n", " # out_ch, kernel, stride, dropout\n", " [bc*2, 21, 2, 0], #1 128 -> 64\n", " [int(bc*2.5), 7, 4, 0.05], #2 64 -> 16\n", " [int(bc*3.2), 7, 4, 0.05], #3 16 -> 4\n", " [bc*4, 7, 4, 0.05] #4 4 -> 1\n", "]\n", "\n", "\n", "default_dilated_params = [\n", "# out_ch, kernel, dilated, dropout\n", " [bc, 3, 1, 0.1],\n", " [bc, 3, 2, 0.1], \n", " [bc, 3, 4, 0.1], \n", " [bc, 3, 8, 0.1], \n", " [bc, 3, 16, 0.1], \n", " [bc, 3, 32, 0.1],\n", " [bc, 3, 64, 0.1]\n", "]\n", "\n", "\n", "class Net(chainer.Chain):\n", "\n", " def 
__init__(self, squeeze_params=default_squeeze_params, dilated_params=default_dilated_params, n_targets=10):\n", "        super(Net, self).__init__()\n", "        self._n_squeeze = len(squeeze_params)\n", "        self._n_dilated = len(dilated_params)\n", "        with self.init_scope():\n", "            in_ch = 4\n", "            for i, param in enumerate(squeeze_params):\n", "                out_ch, kernel, stride, do_rate = param\n", "                setattr(self, \"s_{}\".format(i), SqueezeBlock(in_ch, out_ch, kernel, stride, do_rate))\n", "                in_ch = out_ch\n", "            for i, param in enumerate(dilated_params):\n", "                out_ch, kernel, dilated, do_rate = param\n", "                setattr(self, \"d_{}\".format(i), DilatedBlock(in_ch, out_ch, kernel, dilated, do_rate))\n", "                in_ch += out_ch\n", "            self.l = L.ConvolutionND(1, None, n_targets, 1)\n", "\n", " def forward(self, x):\n", "        # x : (B, X, 4)\n", "        xp = cp.get_array_module(x)\n", "        h = xp.transpose(x, (0, 2, 1))\n", "        h = h.astype(xp.float32)\n", "\n", "        for i in range(self._n_squeeze):\n", "            h = self[\"s_{}\".format(i)](h)\n", "\n", "        hs = [h]\n", "        for i in range(self._n_dilated):\n", "            h = self[\"d_{}\".format(i)](hs)\n", "            hs.append(h)\n", "\n", "        h = self.l(F.concat(hs, axis=1))\n", "        h = F.transpose(h, (0, 2, 1))  # 出力はVariableなのでF.transposeで転置する\n", "        return h\n" ], "execution_count": 6, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Kc3RNwK_qHzS" }, "source": [ "このネットワークは初期化時の引数としてSqueezeBlockに関するパラメータと,DilatedBlockに関するパラメータを受け取ります.\n", "\n", "それぞれ,出力チャンネル,カーネルサイズ,ストライド,ドロップアウト率の四つ組からなるリストと,出力チャンネル,カーネルサイズ,dilateサイズ,ドロップアウト率の四つ組からなるリストを受け取ります." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "s3T5pRubrlba" }, "source": [ "次に,ブロックの定義をします."
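, "\n", "なお,ブロックの構成は初期化時の引数で差し替えられます.例えば,Dilated部分を2層だけにした小さな構成は次のように指定できます(`small_dilated`や`net`は説明用の仮の名前で,次のセルでブロックを定義した後に実行できます).\n", "```python\n", "small_dilated = [\n", "    # out_ch, kernel, dilate, dropout\n", "    [bc, 3, 1, 0.1],\n", "    [bc, 3, 2, 0.1],\n", "]\n", "net = Net(dilated_params=small_dilated)\n", "```"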
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "shOuWcBkrpOE", "colab": {} }, "source": [ "import chainer\n", "import chainer.functions as F\n", "import chainer.links as L\n", "import numpy as np\n", "import cupy as cp\n", "\n", "class WNConvolutionND(L.ConvolutionND):\n", "    def __init__(self, *args, **kwargs):\n", "        super(WNConvolutionND, self).__init__(*args, **kwargs)\n", "        # 重みの長さ成分を表すパラメータgを追加し,初期値は各出力チャンネルのノルムとする\n", "        self.add_param('g', self.W.data.shape[0])\n", "        norm = np.linalg.norm(self.W.data.reshape(\n", "            self.W.data.shape[0], -1), axis=1)\n", "        self.g.data[...] = norm\n", "\n", "    def __call__(self, x):\n", "        norm = F.batch_l2_norm_squared(self.W) ** 0.5\n", "        channel_size = self.W.data.shape[0]\n", "        norm_broadcasted = F.broadcast_to(\n", "            F.reshape(norm, (channel_size, 1, 1)), self.W.data.shape)\n", "        g_broadcasted = F.broadcast_to(\n", "            F.reshape(self.g, (channel_size, 1, 1)), self.W.data.shape)\n", "        return F.convolution_nd(\n", "            x, g_broadcasted * self.W / norm_broadcasted, self.b, self.stride,\n", "            self.pad, self.cover_all, self.dilate)\n", "\n", "class SqueezeBlock(chainer.Chain):\n", "    def __init__(self, in_ch, out_ch, kernel, stride, do_rate):\n", "        super(SqueezeBlock, self).__init__()\n", "        self.do_rate = do_rate\n", "        with self.init_scope():\n", "            pad = kernel // 2\n", "            self.conv = WNConvolutionND(1, in_ch, out_ch*2, kernel, pad=pad, stride=stride)\n", "\n", "    def forward(self, x):\n", "        h = self.conv(x)\n", "        h, g = F.split_axis(h, 2, 1)\n", "        h = F.dropout(h * F.sigmoid(g), self.do_rate)\n", "        return h\n", "\n", "class DilatedBlock(chainer.Chain):\n", "    def __init__(self, in_ch, out_ch, kernel, dilate, do_rate):\n", "        super(DilatedBlock, self).__init__()\n", "        self.do_rate = do_rate\n", "        with self.init_scope():\n", "            self.conv = WNConvolutionND(1, in_ch, out_ch*2, kernel, pad=dilate, dilate=dilate)\n", "\n", "    def forward(self, xs):\n", "        # 以前の全ての途中結果(スキップ接続)をチャンネル方向に連結してから畳み込む\n", "        x = F.concat(xs, axis=1)\n", "        h = self.conv(x)\n", "        h, g = F.split_axis(h, 2, 1)\n", "        h = F.dropout(h * F.sigmoid(g), self.do_rate)\n", "        return h\n" ], "execution_count": 7, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "fHZuRr36bxHv" }, "source": [ "![ネットワーク構造](https://github.com/japan-medical-ai/medical-ai-course-materials/raw/master/notebooks/images/7/network.png)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "RrTAARyW2AYQ" }, "source": [ "WeightNormalization[2]はパラメータを長さと向きに分解して表現する手法で,今回のような系列問題の場合によく使われる正規化法です.コード中ではWeightNormalizationが適用された畳み込み層である`WNConvolutionND`が定義されています.\n", "\n", "SqueezeBlockは配列を縮めていき,長さが$2^{17}$の配列を$2^{10}$に縮めるためのブロックです(上図).\n", "1次元配列を扱うためWNConvolutionNDを使い,最初の引数で1次元配列であることを示す`1`を指定しています.\n", "また,活性化関数では$h = Wx \\odot \\mathrm{sigmoid}(Ux)$($\\odot$は要素毎の積)と表されるGated Linear Unit[3]を利用しています.計算では効率化のため,WxとUxを別々に計算するのではなく,2倍の出力チャンネル数を持つConvolutionを適用した後に出力結果をチャンネル方向に2つに分割し$(Wx, Ux)$,片方にsigmoid関数を適用した後,それらを要素毎にかけ合わせます.\n", "\n", "DilatedBlockは,すでに長さ1024になった配列に対し,Dilated Convolutionを使って遠距離にある情報も使って計算していくブロックです(上図).引数として`dilate`を受け取ります.Dilated Convolutionを使う場合は,通常のConvolution層(今回はConvolutionNDですが,Convolution2Dでも同様です)の引数に`dilate`を加えるだけで計算できます.\n", "\n", "また,DilatedBlockではDenseNet[4]と呼ばれる,以前の途中結果が全て次の層の入力として使われる手法を採用します(DilatedBlock内 forward()内の`concat`がそれに対応).これはニューラルネットワークで多くのスキップ接続を作ることで,層が増えても勾配が減衰せず,学習がしやすくなることを利用したものです.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "y-ZdRuhSq2Rq" }, "source": [ "それでは,試しにネットワークを構築して,そこにサンプルデータを流してみましょう.\n", "\n" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "DARrKIMurGiH", "outputId": "5435968a-95a9-4fc6-d1c6-5c5ee39a25dd", "colab": { "base_uri": "https://localhost:8080/", "height": 35 } }, "source": [ "import numpy as np\n", "n = Net()\n", "size = 131072  # 128 * 1024\n", "batchsize = 4\n", "x = np.empty((batchsize, size, 4), dtype=np.bool)\n", "y = n.forward(x)\n", "print(y.shape)" ], "execution_count": 8, "outputs": [ { "output_type": "stream", "text": [ "(4, 1024, 10)\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", 
"metadata": { "colab_type": "text", "id": "ydR6gwYCsATQ" }, "source": [ "```\n", "(4, 1024, 10)\n", "```\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "OyJ8lu_psGlk" }, "source": [ "ここで,もともとバッチサイズ(B)=4, 入力長(L)=131072, 入力チャンネル数(C)=4だった配列が計算後はB=4, L=1024, C=10の配列となりました." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "vLNgEVh0vjOt" }, "source": [ "今回予測するカバレッジ値は,フラグメント毎にDNA関連タンパク質がどの程度の頻度で結合したかを表すカウントデータであるとみなせます.そこで学習では,カウントデータに対する損失関数である対数ポアソン損失関数を利用します.\n", "\n", "対数ポアソン損失関数を使う場合,モデルはポアソン分布の唯一のパラメータである平均を予測し,その予測された平均をもったポアソン分布を使った場合の学習データの尤度を計算します.そしてその尤度の最大化,すなわちそれと等価な負の対数尤度の最小化を行います.この際,プログラム上では学習対象パラメータが含まれない項を無視しています.\n", "なお,この損失はそのままだと最小値が$0$にはならないので,最小値である$t - t \\log t$をあらかじめ引いておき,損失関数の最小値が$0$となるようにします." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "rgQmu0Pgvh0P", "colab": {} }, "source": [ "import chainer\n", "import chainer.functions as F\n", "import cupy as cp\n", "import numpy as np\n", "\n", "def log_poisson_loss(log_x, t):\n", "    # モデルはlog(平均)を出力するとして,定数項を除いた負の対数尤度を計算する\n", "    loss = F.mean(F.exp(log_x) - t * log_x)\n", "    # 最小値が0になるよう,t毎の最小値 t - t*log(t) の平均を引く(t=0はma.logでマスク)\n", "    t = chainer.cuda.to_cpu(t.astype(np.float32))\n", "    offset = F.mean(cp.array(t - t * np.ma.log(t)))\n", "    return loss - offset\n", "\n", "\n", "def log_r2_score(log_x, t):\n", "    return F.r2_score(F.exp(log_x), t)" ], "execution_count": 9, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "40kTUr3O2lu5" }, "source": [ "また,学習率の調整にCosineSchedulerを使います.ニューラルネットワークの学習では,徐々に学習率を小さくしていくと,より汎化性能の高い解を見つけられることがわかっています.ニューラルネットワークの学習の目的関数には多くの性能の悪い局所解があるため,最初は学習率を高くして局所解にはまらないようにして全体の中での良い解を探し,後半は徐々に学習率を0に近づけていき収束させるというものです.\n", "CosineSchedulerは,Cosine関数の0度から180度までの変化に沿って学習率を初期値から0へと変化させます.また,学習は初期が不安定なので,最初のn_warmups回は学習率を0から初期学習率まで線形に増やすことも一般的です.今回は学習率が低めで学習も安定しているので,n_warmupsは0としてあります.\n" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "QvjM_C-z2o8m", "colab": {} }, "source": [ "from chainer import training\n", "import numpy as np\n", "import math\n", 
"\n", "class CosineScheduler(training.Extension):\n", "\n", "    def __init__(self, attr='lr', init_val=0.0001, n_decays=200, n_warmups=3, target=None, optimizer=None):\n", "        self._attr = attr\n", "        self._target = target\n", "        self._optimizer = optimizer\n", "        self._min_loss = None\n", "        self._last_value = None\n", "        self._init_val = init_val\n", "        self._n_decays = n_decays - n_warmups\n", "        self._decay_count = 0\n", "        self._n_warmups = n_warmups\n", "\n", "    def __call__(self, trainer):\n", "        updater = trainer.updater\n", "        optimizer = self._get_optimizer(trainer)\n", "        epoch = updater.epoch\n", "        if epoch < self._n_warmups:\n", "            # ウォームアップ期間: 学習率を線形に増やす\n", "            value = self._init_val / (self._n_warmups + 1) * (epoch + 1)\n", "        else:\n", "            # Cosine関数に沿って学習率を初期値から0へ減衰させる\n", "            value = 0.5 * self._init_val * (1 + math.cos(math.pi * (epoch - self._n_warmups) / self._n_decays))\n", "        self._update_value(optimizer, value)\n", "\n", "    def _get_optimizer(self, trainer):\n", "        return self._optimizer or trainer.updater.get_optimizer('main')\n", "\n", "    def _update_value(self, optimizer, value):\n", "        setattr(optimizer, self._attr, value)\n", "        self._last_value = value" ], "execution_count": 10, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "56jZaaD82p-X" }, "source": [ "最後に,学習中に訓練データへ意味を変えない変化を加えるData Augmentationを適用します.これは画像において回転させたり,平行移動させたりする場合と同じです.\n", "今回は128bp毎にカバレッジ値を予測していますが,数塩基(例えば4~8など)移動したとしてもカバレッジ値は同じ程度になることが期待されます.そこで最大max_shift分だけ配列を前後にシフトします(完全にランダムな塩基配列を余った部分に入れると実際の塩基配列の分布と変わる可能性があるので,ここではroll()関数で巡回シフトしています)."
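, "\n", "巡回シフトの動作は,小さな配列で確認できます(説明用の最小例です).\n", "```python\n", "import numpy as np\n", "\n", "x = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])  # 長さ4のone-hot風の配列\n", "print(np.roll(x, 1, axis=0))  # 末尾の行が先頭に巡回して入る\n", "```"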
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "UX2NE83o274Y", "colab": {} }, "source": [ "import chainer\n", "import numpy as np\n", "import random\n", "\n", "class PreprocessedDataset(chainer.dataset.DatasetMixin):\n", "\n", "    def __init__(self, xs, ys, max_shift):\n", "        self.xs = xs\n", "        self.ys = ys\n", "        self.max_shift = max_shift\n", "\n", "    def __len__(self):\n", "        return len(self.xs)\n", "\n", "    def get_example(self, i):\n", "        # Data Augmentation: 配列方向に最大max_shiftだけランダムに巡回シフトする\n", "        x = self.xs[i]\n", "        y = self.ys[i]\n", "\n", "        s = random.randint(-self.max_shift, self.max_shift)\n", "        x = np.roll(x, s, axis=0)\n", "        return x, y" ], "execution_count": 11, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9RCVvBw0v9i-" }, "source": [ "これで準備が全て整いました.残りはChainerのTrainerを構成して学習するだけです.以下のコードを実行してください.\n", "\n", "元々のデータ全体では学習に時間がかかるので,データの1/`ratio`だけを学習,検証用データとして利用します.今回`ratio`は1に設定されています.この場合30分程度で学習が完了します.短い時間で試したい方は`ratio=1`を`ratio=10`や`ratio=20`として実験してみてください.\n" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "b1_e0bE7wB48", "outputId": "dc2ce3b1-a66f-4c42-935c-223dab2c9685", "colab": { "base_uri": "https://localhost:8080/", "height": 917 } }, "source": [ "import chainer\n", "import chainer.functions as F\n", "import chainer.links as L\n", "import numpy as np\n", "from chainer.training import extensions\n", "from chainer import training\n", "import h5py\n", "\n", "ml_h5 = h5py.File('seq.h5', 'r')\n", "\n", "train_x = ml_h5['train_in']\n", "train_y = ml_h5['train_out']\n", "\n", "valid_x = ml_h5['valid_in']\n", "valid_y = ml_h5['valid_out']\n", "\n", "test_x = ml_h5['test_in']\n", "test_y = ml_h5['test_out']\n", "\n", "ratio = 1\n", "train_x = train_x[:len(train_x)//ratio]\n", "train_y = train_y[:len(train_y)//ratio]\n", "valid_x = valid_x[:len(valid_x)//ratio]\n", "valid_y = valid_y[:len(valid_y)//ratio]\n", "\n", "\n", "max_shift_for_data_augmentation = 5\n", "train = 
PreprocessedDataset(train_x, train_y, max_shift_for_data_augmentation)\n", "val = chainer.datasets.TupleDataset(valid_x, valid_y)\n", "\n", "batchsize = 8\n", "\n", "train_iter = chainer.iterators.SerialIterator(train, batchsize)\n", "val_iter = chainer.iterators.SerialIterator(val, batchsize, repeat=False, shuffle=False)\n", "\n", "model = L.Classifier(Net(), lossfun=log_poisson_loss, accfun=log_r2_score)\n", "\n", "lr = 0.001\n", "optimizer = chainer.optimizers.Adam(alpha=lr, beta1=0.97, beta2=0.98)\n", "optimizer.setup(model)\n", "optimizer.add_hook(chainer.optimizer_hooks.GradientClipping(threshold=0.01))\n", "\n", "\n", "updater = training.updaters.StandardUpdater(\n", " train_iter, optimizer, device=0)\n", "\n", "n_epochs = 10\n", "n_warmups = 0\n", "out = \"out\"\n", "trainer = training.Trainer(updater, (n_epochs, 'epoch'), out=out)\n", "trainer.extend(CosineScheduler(attr='alpha', init_val=lr, n_decays=n_epochs, n_warmups=n_warmups), trigger=(1, 'epoch'))\n", "\n", "trainer.extend(extensions.Evaluator(val_iter, model, device = 0))\n", "trainer.extend(extensions.LogReport(trigger=(0.2, 'epoch')))\n", "trainer.extend(extensions.snapshot_object(model, 'model_epoch_{.updater.epoch}'), trigger=(1, 'epoch'))\n", "\n", "trainer.extend(extensions.PrintReport(\n", " ['epoch', 'main/loss', 'validation/main/loss', 'elapsed_time']), trigger = (0.1, 'epoch'))\n", "\n", "# trainer.extend(extensions.ProgressBar())\n", " \n", "trainer.run()\n" ], "execution_count": 12, "outputs": [ { "output_type": "stream", "text": [ "epoch main/loss validation/main/loss elapsed_time\n", "\u001b[J0 2.48903 67.7519 \n", "\u001b[J0 1.84639 117.127 \n", "\u001b[J0 1.89686 166.72 \n", "\u001b[J0 1.81704 215.449 \n", "\u001b[J1 1.85827 1.85512 274.106 \n", "\u001b[J1 1.81286 323.281 \n", "\u001b[J1 1.74802 372.488 \n", "\u001b[J1 1.80567 421.261 \n", "\u001b[J1 1.7467 470.755 \n", "\u001b[J2 1.70371 1.78047 528.83 \n", "\u001b[J2 1.77928 577.477 \n", "\u001b[J2 1.67051 626.814 \n", "\u001b[J2 
1.6415 675.927 \n", "\u001b[J2 1.67238 725.017 \n", "\u001b[J3 1.69656 1.70897 782.987 \n", "\u001b[J3 1.63935 831.673 \n", "\u001b[J3 1.64996 881.092 \n", "\u001b[J3 1.63925 930.107 \n", "\u001b[J3 1.71683 979.111 \n", "\u001b[J4 1.63116 1.71748 1036.98 \n", "\u001b[J4 1.64786 1085.9 \n", "\u001b[J4 1.6442 1134.54 \n", "\u001b[J4 1.57821 1183.92 \n", "\u001b[J4 1.62886 1232.91 \n", "\u001b[J5 1.61523 1.66392 1290.8 \n", "\u001b[J5 1.65216 1339.78 \n", "\u001b[J5 1.61142 1388.37 \n", "\u001b[J5 1.61483 1437.71 \n", "\u001b[J5 1.57835 1486.61 \n", "\u001b[J6 1.56529 1.63406 1544.53 \n", "\u001b[J6 1.59062 1593.49 \n", "\u001b[J6 1.61102 1642.09 \n", "\u001b[J6 1.60003 1691.49 \n", "\u001b[J6 1.57222 1740.46 \n", "\u001b[J7 1.55098 1.62176 1798.31 \n", "\u001b[J7 1.54207 1847.28 \n", "\u001b[J7 1.5653 1895.92 \n", "\u001b[J7 1.57523 1944.68 \n", "\u001b[J7 1.61043 1993.73 \n", "\u001b[J8 1.57391 1.62377 2051.65 \n", "\u001b[J8 1.51835 2100.61 \n", "\u001b[J8 1.58225 2149.57 \n", "\u001b[J8 1.59289 2198.5 \n", "\u001b[J8 1.56643 2247.32 \n", "\u001b[J9 1.55151 1.62115 2305.72 \n", "\u001b[J9 1.53593 2354.7 \n", "\u001b[J9 1.57812 2403.76 \n", "\u001b[J9 1.54277 2452.85 \n", "\u001b[J9 1.55514 2501.51 \n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mN4nDWQW3Dki" }, "source": [ "学習が成功したならば,ディレクトリのout以下に学習されたモデルが出力されているはずです.実際にモデルが出力されているのかを確認しましょう." 
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "hfT1yyTl3C9X", "outputId": "3e2fc02b-8924-4703-dd64-ae99fd04073e", "colab": { "base_uri": "https://localhost:8080/", "height": 233 } }, "source": [ "!ls -l out/" ], "execution_count": 13, "outputs": [ { "output_type": "stream", "text": [ "total 14172\n", "-rw-r--r-- 1 root root 10080 Dec 16 05:24 log\n", "-rw-r--r-- 1 root root 1445890 Dec 16 04:47 model_epoch_1\n", "-rw-r--r-- 1 root root 1447626 Dec 16 05:25 model_epoch_10\n", "-rw-r--r-- 1 root root 1446428 Dec 16 04:51 model_epoch_2\n", "-rw-r--r-- 1 root root 1446742 Dec 16 04:55 model_epoch_3\n", "-rw-r--r-- 1 root root 1447061 Dec 16 04:59 model_epoch_4\n", "-rw-r--r-- 1 root root 1447268 Dec 16 05:04 model_epoch_5\n", "-rw-r--r-- 1 root root 1447473 Dec 16 05:08 model_epoch_6\n", "-rw-r--r-- 1 root root 1447585 Dec 16 05:12 model_epoch_7\n", "-rw-r--r-- 1 root root 1447649 Dec 16 05:16 model_epoch_8\n", "-rw-r--r-- 1 root root 1447650 Dec 16 05:21 model_epoch_9\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "L4eQCDXG3L6e" }, "source": [ "次に,学習したモデルを用いてテストデータに対しても予測してみます.次のようにして学習が終わったモデルを読み込み,テストデータに対してモデルを適用してみましょう." 
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "UfJ7ZEQX3UQS", "outputId": "17722083-b2b4-4114-f4c0-138e7ba82987", "colab": { "base_uri": "https://localhost:8080/", "height": 89 } }, "source": [ "import chainer\n", "import chainer.links as L\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "\n", "model_n_epoch = 10\n", "out_dir = 'out'\n", "model = L.Classifier(Net())\n", "chainer.serializers.load_npz('{}/model_epoch_{}'.format(out_dir, model_n_epoch), model)\n", "predictor = model.predictor\n", "\n", "print(len(test_x))\n", "with chainer.no_backprop_mode():\n", "    test_y_estimated = F.exp(predictor(test_x[:1]))\n", "\n", "test_y = test_y[:1]\n", "\n", "print(test_y_estimated.shape)\n", "print(test_y_estimated[0,:,0])\n", "\n" ], "execution_count": 14, "outputs": [ { "output_type": "stream", "text": [ "500\n", "(1, 1024, 10)\n", "variable([1.8674504 2.004048 1.68377 ... 0.81418294 0.7608197\n", " 0.8720923 ])\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "dlA0DLxY3atL" }, "source": [ "結果を抜粋して表示してみましょう.ここでは1つ目(i=0)の出力について正解と推定結果を出力しています.今回はデータ数を絞り,予測対象も10種類に限定し,学習回数も少なくしていますが,それでもピークを捉えられていることがわかると思います."
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "nN4rkeuU7rjV", "outputId": "3e7300c6-abab-4729-bf04-3c56233a1288", "colab": { "base_uri": "https://localhost:8080/", "height": 630 } }, "source": [ "y = test_y_estimated.data\n", "fig_size = plt.rcParams[\"figure.figsize\"]\n", "fig_size[0] = 20\n", "fig_size[1] = 10\n", "i = 0\n", "b1 = plt.bar(range(y.shape[1]), y[0,:,i])\n", "b2 = plt.bar(range(y.shape[1]), test_y[0,:,i])\n", "plt.legend((b1, b2), ('estimated', 'observed'))\n" ], "execution_count": 15, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "" ] }, "metadata": { "tags": [] }, "execution_count": 16 }, { "output_type": "display_data", "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAABIIAAAI/CAYAAAALEXL9AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3XmYZGV9L/BvzwwQwNEM0FGCEUXM\nq4QYl6g3rsgScYveQLbrmoEIZlTAyNyogKKiPJhESCARn5gEiUvMKl4MGJyIaNSgKGjUN0FQE0dk\nlIFnAB2ZYe4f3Y1Nr9VV1V3V/X4+zzPPVJ86dc57lt9ZvnXOqZFdu3YFAAAAgJVv1aAbAAAAAMDS\nEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0Ys0gR75ly7YV89v169btla1b7xx0\nM2CoqAuYmdqA6dQFTKcuYDp10ZnR0bUjs73niqA+WbNm9aCbAENHXcDM1AZMpy5gOnUB06mL3nV0\nRVAp5dAkH0ryjlrr+aWUn0lycZLVSb6T5EW11u2llBckOTnJ3UneVWt99yK1GwAAAIAFmveKoFLK\n3kn+JMnHJnV+U5ILaq1PSXJ9kvXj/Z2R5MgkhyU5pZSyT99bDAAAAEBXOrk1bHuSZyXZPKnbYUku\nGX/94YyFP09IcnWt9bZa6w+SfCrJk/rXVAAAAAB6Me+tYbXWHUl2lFImd9671rp9/PXNSfZP8oAk\nWyb1M9EdAAAAgCHQj18Nm+1J1LM+oXrCunV7ragHPY2Orh10E2DoqAuYmdqA6dQFTKcuYDp10Ztu\ng6DbSyl7jt8CdkDGbhvbnLGrgiYckOQzcw1kJf3k2+jo2mzZsm3QzYChoi5gZmoDplMXMJ26gOnU\nRWfmCsu6/fn4K5IcM/76mCSXJflskseVUn6ylHKfjD0f6Kouhw8AAABAn3Xyq2GPLaV8PMlLk5w0\n/vrMJC8ppVyVZJ8kF41fHfT7SS7PWFB0Zq31tkVqNwAAAMCSuv76/8q3vvXNJMkb3vDabN/+w66H\n9cUvXpOtW2/pqN8777wzxx773K7HNVknD4v+fMZ+JWyqo2bo9++S/F3vzQIAAABatf7sTX0d3l/8\n/uF9Gc6VV27Kwx9+SB70oANz5plv62lYl156SX7rt16Ydev26UvbOtWPh0UDAAAALFs7d+7MOeec\nlc2bv50dO3bk+ONPzM03fzf/8A8fz
Jo1u+Xgg382z3/+MfnQh/4hV165KevWrcsZZ7w273nP3+Qd\n7zgn69atS61fy623bs0LXvCSXHrph3Pbbbfm/PPflZGR5MwzT8sPfvCD/PCHP8wpp5yaO+64PVdd\n9fHceOMNectbzkmtX8kHPvDXWb16TUp5RF75ylNyxx235/Wv35gf/ehHeeQjH9W3aRUEAQAAAE37\nl3+5LPvuu19e+9ozcuutt+akk05Mkpxzzrm5//0fkEsvvSQPfOAD84Qn/FIOO+yIHHLIoff6/OrV\na3LeeX+WM888LV/60nU577w/zZvffHquueZzefCDH5LnPOf5eepTD8vnP3913vvei3LWWW/PwQf/\nbF796o25733vm4suenfe+c6/zO67757TT//9XHfdF3P99f+Vgw56aF71qt/Lxz720VxxxeV9mVZB\nEAAAANC0L3/5ulx77Rdy3XVfTJJs3749v/zLz8zrXndqnvGMZ+bII5+RPfb4iVk//4hH/FySZN99\n98uBBz44SbJu3b65447bs88+++aii/4873//xbnrrrvyEz9x7+HceOMN+e53b8qrX/2KJMkdd9ye\nm266Kd/4xg151KMemyR59KMf27dpFQQBAAAATVuzZre8+MXrc9RRR9+r+7Oe9Sv5+MevyKte9fJc\ncMG7Zv386tWrZ3y9a9eufPCD78t++/1UTj/9zfna176S888/916f3W23sdvB/uiPzr9X9y996dqs\nWjWSJLn77l1dT9tU3f58PAAAAMCKcMghh+aTn7wySbJ16y258MILcuGFF2S//fbLb/7mC3PooT+f\nm266KSMjI9m5c+eChn3bbbfmgAMemCS58sp/zY4dO5Ikq1atys6dO/OgBz043/jGjff8gti7331h\ntmy5OQ960IH52te+miS55prP9WtSBUEAAABA2w4//MjsuedeOfHE9dm48ZQ88pGPyl577Z0TTvjt\nnHTSyzMyMpKHPexn8wu/8Oice+7b87nP/XvHwz766Gfnb/7mvTnllA35uZ87NN///vdz6aWX5FGP\nekxOO+3/ZvPmb+ekk34vr3nNSXn5y9fntttuzX77jeboo5+d//iPL+Wkk16e//7vb2ZkZKQv0zqy\na1f/Li9aqC1btg1u5H02Oro2W7ZsG3QzYKioC5iZ2oDp1AVMpy5gOnXRmdHRtbOmRq4IAgAAAGiE\nIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACAKY499rm58847B9qG888/\nNx/5yIf7Osw1fR0aAAAAQI82bNrY1+FdcPg5fR3eciYIApbEhk0bbXwBAIChtGPHjpxzzlnZvPnb\n+dGPfpTjjz8xSXLxxX+Za6/9QlavXp23vvUPcscdd+TNbz49q1atys6dO3PGGW/O6OhP3fPZHTt2\n5PjjT8xjH/u4vOIVL8tBBz00d999dz796U/lfe/7++yxxx75whc+n7/92w/ktNPemLe+9cxs27Yt\nO3fuzMknn5qDD35YLr/8I3nvey/K6Oj9s8cee+Sggx7a12kVBAEAAABN+5d/uSy77757zj//Xfne\n97bkFa84IUny0IcenBNO2JDzzz83l19+aXbs2JHHPe4JeelLj0+tX8v3vve9fPGL12TffffLa197\nRm699dacdNKJueiiDyRJDjrooXn+84/N2972pnz+81fniU98cj75yStz2GFH5IMffH+e8IQn5rnP\nfX5uvPGGnHfeH+Qd77ggF154Qd797ouzdu19c9xxL+z7tAqCAAAAgKbV+tU8+tGPTZLst99odt99\nt9xyy/fzmMf8YpLkEY/4uVx77TV5/vOPyeted2q2bduWpz/9iBx66CNz2WX/L9de+4Vcd90XkyTb\nt2/PXXfdNf65Q5MkT3va4fnUpz6RJz7xyfnsZz+T4447Iaef/trceuvWXH75R8Y/98Pcdttt2Wuv\nv
bNu3T5Jkp//+V/o+7QKggAAAIDGjWTXrl33/HXXXXdl1aqRjIyM/LiPkZEcdNDB+au/en/+/d8/\nk3e+8/w8+9m/kjVrdsuLX7w+Rx119LSh7rbbWOzyi7/4+Pzpn56Xr3/9+hxwwAHZa6+9s9tua3LK\nKafm0EMfeU//W7duzapVPx7n3Xff3fcp9athAAAAQNMe8YhDcs01n0uSfPe7N2XVqlW5z33W5tpr\nv5Ak+cpXvpQDD3xIrrji8txww/V56lMPy+/8zu+m1q/mkEMOzSc/eWWSZOvWW3LhhRdMG/7uu++e\nhz70YXnf+96Tww47IklyyCGH5hOf+HiS5MYbb8gHPvDXud/97pfbb78927Zty44dO/KlL13b92l1\nRRAAAADQtCOO+OV84QufzytfeUJ27Lgrp576urzlLW/IjTfekH/8x79Pkqxf/7L8z//8T/7gD96a\nPffcK6tWrcrJJ5+aBz7wZ3LNNVfnxBPXZ+fOnVm//mUzjuNpTzs8Z531hpx88qlJkmOP/Y2cddYb\n87u/e3zuvvvunHzya7Jq1aqsX/+yvOIVL8v+++/f9wdFJ8nI5EufltqWLdsGN/I+Gx1dmy1btg26\nGTBUJteFXw2DH7PPgOnUBUynLmA6ddGZ0dG1I7O959YwAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAA\nAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgA\nAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAI\nAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQg\nCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiE\nIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABo\nhCAIAAAAoBGCIGDRbdi0cdBNAAAAIIIgAAAAgGYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACA\nRgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAA\ngEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAA\nAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAA\nAACARgiCAAAAABohCAIAAABoxJpuPlRKuU+S9yRZl2SPJGcmuSnJnyXZleS6WuvL+9VIAAAAAHrX\n7RVBL01Sa61PT3JskvOSnJvkpFrrk5Lcr5TyzP40EQAAAIB+6DYI+l6Sfcdfr0tyS5KH1FqvHu/2\n4SRH9tg2AAAAAPqoqyCo1vqBJA8qpVyf5BNJXpNk66Rebk6yf+/NAwAAAKBfun1G0AuTfKvWenQp\n5ReS/GOS2yb1MtLJcNat2ytr1qzupglDaXR07aCbAENncl2oEfgx9QDTqQuYTl3AdOqiN10FQUme\nlOTyJKm1XltK2TPJbpPePyDJ5vkGsnXrnV2OfviMjq7Nli3bBt0MGCpT60KNwBj7DJhOXcB06gKm\nUxedmSss6/YZQdcneUKSlFIOTLItyVdLKU8ef/9Xk1zW5bABAAAAWATdXhF0YZK/KKVcOT6MEzP2\n8/EXllJWJflsrfWKPrURAAAAgD7oKgiqtd6e5NdneOspvTUHAAAAgMXS7a1hAAAAACwzgiAAAACA\nRgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiE
IAgAAAGiEIAgAAACgEYIgAAAA\ngEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAA\nAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAA\nAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIg\nAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGC\nIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKAR\ngiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACg\nEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAA\noBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAA\nAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgA\nAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAI\nAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQg\nCAAAAKARgiAAAACARgiCAAAAABqxptsPllJekGRjkh1JzkhyXZKLk6xO8p0kL6q1bu9HIwEAAADo\nXVdXBJVS9k3yhiRPTvKcJM9L8qYkF9Ran5Lk+iTr+9VIAAAAAHrX7a1hRya5ota6rdb6nVrry5Ic\nluSS8fc/PN4PAAAAAEOi21vDHpxkr1LKJUnWJXljkr0n3Qp2c5L9e24dAAAAAH3TbRA0kmTfJP87\nyYFJ/nW82+T357Vu3V5Zs2Z1l00YPqOjawfdBBg6k+tCjcCPqQeYTl3AdOoCplMXvek2CPpukn+r\nte5I8vVSyrYkO0ope9Zaf5DkgCSb5xvI1q13djn64TM6ujZbtmwbdDNgqEytCzUCY+wzYDp1AdOp\nC5hOXXRmrrCs22cEfTTJ4aWUVeMPjr5PkiuSHDP+/jFJLuty2AAAAAAsgq6CoFrrt5P8XZLPJPnn\nJK/M2K+IvaSUclWSfZJc1K9GAgAAANC7bm8NS631wiQXTul8VG/NAQAAAGCxdHtrGAAAAADLjCAI\nAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQg\nCAAAAKARgiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiE\nIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACARgiCAAAAABohCAIAAABo\nhCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggBowoZNGwfdBAAA\nGDhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAA\nANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQA\nAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAE\nAAAA0AhBEAAAAEAjBEEAAAA
AjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQ\nBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRC\nEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0\nQhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAA\nNEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAdCM\nDZs2DroJAAAwUIIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARa3r5cCllzyRfTvLmJB9LcnGS\n1Um+k+RFtdbtPbcQAAAAgL7o9Yqg05LcMv76TUkuqLU+Jcn1Sdb3OGwAAAAA+qjrIKiU8vAkhyS5\ndLzTYUkuGX/94SRH9tQyAAAAAPqqlyuC/jDJqyf9vfekW8FuTrJ/D8MGAAAAoM+6ekZQKeXFST5d\na72xlDJTLyOdDGfdur2yZs3qbpowlEZH1w66CTB0JteFGmEYDMt6OCztgGGiLmA6dQHTqYvedPuw\n6GcnOaiU8pwkD0yyPcntpZQ9a60/SHJAks3zDWTr1ju7HP3wGR1dmy1btg26GTBUptaFGmEYDMN6\naJ8B06kLmE5dwHTqojNzhWVdBUG11t+YeF1KeWOSbyR5YpJjkvz1+P+XdTNsAAAAABZHr78aNtkb\nkryklHJVkn2SXNTHYQMAAADQo25vDbtHrfWNk/48qtfhAQAAALA4+nlFEAAAAABDTBAEAAAA0AhB\nEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAI\nQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEHAktmwaeOgmwAAANA0QRAAAABA\nIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAA\nQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAA\nAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAA\nAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQ\nAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhB\nEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAI\nQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQ\nCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA\n0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQBAAA\nANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQA\nAADQCEEQAAAAQCMEQQAAAACNWNPtB0sp5yR5yvgw3pbk6iQXJ1md5DtJXlRr3d6PRgIAAADQu66u\nCCqlPD3JobXWX0pydJJzk7wpyQW11qckuT7J+r61EgAAAICedXtr2CeS/Nr461uT7J3ksCSXj
Hf7\ncJIje2oZAAAAAH3V1a1htdadSe4Y//O4JB9J8oxJt4LdnGT/3psHAAAAQL90/YygJCmlPC9jQdAv\nJ/mvSW+NdPL5dev2ypo1q3tpwlAZHV076CbA0JlaF+qEQRuWdXBY2gHDRF3AdOoCplMXvenlYdHP\nSPL6JEfXWm8rpdxeStmz1vqDJAck2TzfMLZuvbPb0Q+d0dG12bJl26CbAUNlprpQJwzaMKyD9hkw\nnbqA6dQFTKcuOjNXWNbtw6Lvl+TtSZ5Ta71lvPMVSY4Zf31Mksu6GTYAAAAAi6PbK4J+I8l+ST5Y\nSpno9pIkf15KOSHJN5Nc1HvzAAAAAOiXbh8W/a4k75rhraN6aw4AAAAAi6Xbn48HAAAAYJkRBAEA\nAAA0QhAEAB3YsGnjoJsAAAA9EwQBAAAANEIQBAAAANAIQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAE\nAAAA0AhBEAAAAEAjBEEAAAAAjRAEAQAAADRCEAQAAADQCEEQAAAAQCMEQQAAAACNEAQBAAAANEIQ\nBAAAANAIQRAAAABAIwRBAAAAAI0QBAHAPDZs2jjoJgAAQF8IggAAAAAaIQgCAAAAaIQgCAAAAKAR\ngiAAAACARgiCAAAAABohCAIAAABohCAIAAAAoBGCIAAAAIBGCIIAAAAAGiEIAgAAAGiEIAgAAACg\nEYIgAOijDZs2DroJAAAwK0EQAAAAQCMEQcCicnUEAADA8BAEAQAAADRCEAQAAADQCEEQAAAAQCME\nQQAAAACNEAQBAAAANEIQBAAAANAIQRAAzdmwaeOgmwAAAAMhCAIAAABohCAIAAAAoBGCIAAAAIBG\nCIIAAAAAGiEIAgAAAGiEIAgAAACgEYIgAAAAgEYIggAAAAAaIQgCAAAAaIQgCAAAAKARgiAAAACA\nRgiCAAAAABohCAIAAABohCAIAJbAhk0bB90EAAAQBAEAAAC0QhAEAAAA0AhBEAAAAEAjBEEAAAAA\njRAEAdAUD20GAKBlgiBgSa0/e9OgmwAAANAsQRAAAABAIwRBAAAAAI0QBAEAAAA0QhAEAAAA0AhB\nEAAAAEAjBEEAACwbfn0SAHojCAIAAABohCAIgCZt2LRx0E0AAIAlJwgCgD4RLgEAMOwEQQB9JAgA\nAACGmSAIADok6AMAYLkTBAEAAAA0QhAEAHNwFRAAACuJIAhgQAQMAN1Zf/amQTcBAJYtQRAAAABA\nIwRBAAAAAI0QBAHAOLfrAQCw0gmCAGAKgRAAACuVIAhYNM/9vQ8tynCdpAMAAHRHEAQAmR4wChwB\nAFiJBEEAAAw1PxfPcmJ9ZRj5govJBEHAsrBh00Y7MJZcJ+vcXP1YZ6G/nGADQO8EQTBEnDQCAACw\nmARBwMAJwAAAAJaGIGgFclLNcmFdZTnq9XYxoDtuCwNWstaPHVqf/qUmCIIBWy4bvX61c8/HX3av\nvx3YM0jLpf4AYLHYFw6G+c4gCYIaZKPDUlh/9qZpoQ+DpfYX17DN32FrD8PBegH0gy/yGEb2cZ0T\nBLHoOi3IhRTuYvXL0hMW9WZQ67e6olXW/f4Z1nk5rO0CgH4RBNEzB0xLpx/zup/DmG1YrX1LtByn\nd1jqdljaMdmwBM3DtF4N43Jaqcxr6A+1BDA7QRDT2HHe26BPxnoZf6/L0rrQm2Gdf8ParpXK/IaV\nbb4a37Bp4z3/WmcetM3y74/lPh+Xe/tXCkHQCjZMRTboMKVTwzTPhskwzZeZ2rJc1q/FNkzLiXsv\nj26WzVLcNtlJu9QXw2pQ62YvtW07DYO3EupwMaah1+MWlhdBUB8tp4JZqoOnfj0faLGvbGl1w7f+\n7E2zrguzzYdO589cJ7EzjXNyt5WwDJbrNHRzUrNcp7WfluM86DUEmu8W0eVmpUzHsOtkmzF1vevn\nMYvl3JuVOP+6naZhnBdL1SZfELBYhrGuVipBUJ889/c+d
M/rYdg4LnURdRKkdDNfBjkvhzUcmjpP\nuglsFnN6ehn2bMt7rvVg0L9OttTraD9D0fnma0uGqcYXQ6fbDQZr6nJZScvJOrh8dHLl7VItv6Wq\niZmG2+24Wth/9nLc1csyXOw9YivXAAAMjUlEQVT1ruXtUsvT3ipBUJ8N8qR0rqs7lqOFTEsnG69e\nr3BZKrNdLdPp/FjIyf3EutprSDfxupd5ObVuJg+/k5qaqT2LZaYrIdafvemev+cKtDq5Gmqh6/N8\n60c/5kc3B8nDVlvDYGKe/PrfvHzO9xf6Xqd6XReWyz5mvlqc3M98/S2WXk+sW66vxTjO6nQ/ttCr\naOldP+ftSruScT6DCuyGzbC3bzlwBfjKIwjqk7lOYpdD0UycxPbzYHjYbj8btmEvdPwzHfjOFn50\n8m1epychnQ63X/NqIevNTGHRhk0bh+on6Werraknqt1O92zvTV13ZvrM5Pk0X0C1kg26zmfSy7yf\nLRhcaG10cntOJ4HLYurXFXKdzJfFvDKh3/NvtgP2pVpOCwnmZ1vPJtbj+T4/375rruXUaVA4uT0L\nNVdwNHU+dbJfXspamxqadDvufh8nTBiG8LTb9Wsp27GYn+10WIPaz06tq6nbxkHu//t5B8VCxreQ\n7crEMWw/vvjrdwg78YXasCzP5UgQtIgW8x73Tg3DN8hzDa+TjWCn83E5F/9C5/FMB4x7Pv6yrubB\n1BOgfszHhV7B042Zrr5byquCllIn0zJTqLPQK4tmOtjv5EqnyZ+fWDcXegKzkpbXfCbPz8kHWTMt\nt6nd5jtx7GY+zreeTKxbez7+sllDxaXWyQl8N1dXTg0hOtHNNnOh26p+LefJV4EuxXLsNOieaM/k\n2+w7MdsXJLONd2rtzdf/XOPqJaSaOpy5/u70vWHT6byZr17n+3wn458p1JtrGFOXbac1OszLp19t\nm21ezvReJ+3pZvvZ7W18w3SeMNd8m21b02lY3Etb5uo2m26W+2IY5vobVoKgPpvtBHjiAGzQG6GZ\ndmxznbDN1H8nO7uZQorJ45rrwKvXoGeu9k3dkA5TcNDLujG57f0IdhY6L7q5AqfTsGghbVnKK4Em\n2tXPcc50kj/TQXI3V3Us9OHdM+l2Wuers05PFodJP9o6Eaws9DOznaTMFXx0cvXX5EBqoduNQVyF\n1+mJxHwHuVNra6Y67GR/NfXAfr51ZGJcCwkzZrqKZFjNdBzQSXv7sZ2Z6b1u59VCPzfb8c9c/XQT\nBg6TmU5OJ455ZwuxZ/p8N8FPr1dNLfQkduq+eer4ej2pXqi5tkEzXYExW6Aw9TMzDXOubrP1M3lc\nnQx7JpO/ZJr82anTN1N/y9ls0zLXOc5sf3f6mQmdHDPOdSzRiW7Wh4Uc/zA/QdAiG6YVc6Eb804K\nbaYD5vn0ckI608HGbH/PtzGa62Bt0Mttrnk5+dv5qd3mGt5C+hukTk8CBn37V7fjn+lkaCIQ6Hb5\nTF0XFnPeLPTAZPIB2tRnUs0Ubg3DOtippWrrbNuxQW6zlmp8c520zNSe2QLTfre30xPZqScm3ewn\np36RMvX9Ttq5nOpqsoVsy+YLmrrZLnYS5s3Uf6/HQ3OFj4tprhPHyfuWmaazm/Z2G7LNVH+zbSsW\n0q5ePz9XmxfLTOt9N0HBfP3PtU53E2x2+vmFnhdM7j7fse/U9XhqkLaYy26+KwwXWkezbXsm/l7I\nFY0LuZJxtjbPND/nG9Zs3TqpwUGfEyxXfQ+CSinvKKV8upTyb6WUx/V7+Mxtvm8Wp74/X+FM3vHP\ndRtOJ8Na6Hjn04/boHrtbyn0qy3dLp/ZPrdU86gf68pS6CUY6ra/fh6k9DofOw0mJ7Ypsz2vZliW\n5yAsxpV13czjYV0GE+H+bPuz+ephIc8DmtrfQraDM7Vj8slav8Pa+W7dHEStdbO/me34ZDFuNe50\neU4NPhbyBUw/DSoMS
hZ2rNWPZ/RNHLP2egX9Qo93F9LvsG4j59Pp9mrq/J8rUOmkljrpZ2oY0+k6\nP1ubelnewxCc97L+JgsPsftZH1PbMdsXNBPvT/5/ts9OhFkzfTFC9/oaBJVSnpbkYbXWX0pyXJI/\n7ufwGX5LsXPs1w56ue7Ik8Vr+6DnyaDHv5h6mbZBzZelHu9yWv6L3dZhOREZdBA827gWsp3vZ1sX\nI3jrtJ+5PrfQK1GW2mKsL4MKZZbqGKPXE8Fex9nv8XUbIszUfZD7imE+xlzM+dPNMHs9WZ8aICzF\ntrQfwxrk+Lup4cWeZ4u53OhNv68IOiLJPyVJrfWrSdaVUu7b53EsC8Ows5psvg3oQts5MbzFSpCX\nyjC2aakM+7QPe/uGQb/m0VKeKC/2+JfSMATfU/sdtn1Pr5Yi9O536DYsJ2FLObx+G/b2zabb+ltu\n0zvoIHq2fldS4LgQvrTpziDC1rksxy955wpt+xGGD3qZrHT9DoIekGTLpL+3jHdr0jCvvLMV1yBO\nLId5PiXD/S1cv9L+lXgQMSzfGPZz2S5VyNLvA99hr/HlZqmChmFabsP2LfxMJxDDECgN04HzIK+I\n6LR7r8Ptx/Dmq71hDiK6rct+fjHZzWeGMcgaJgtdrotZX8O0HRmEmbbpC9nO9TpvF2ubuhjDHKbl\ntlyM7Nq1q28DK6W8K8mltdYPjf/9ySTra63/2beRAAAAANCVfl8RtDn3vgLop5N8p8/jAAAAAKAL\n/Q6CPprk2CQppTwmyeZa67Y+jwMAAACALvT11rAkKaWcneSpSe5OsqHWem1fRwAAAABAV/oeBAEA\nAAAwnPp9axgAAAAAQ0oQBAAAANCINYNuwEpQSnlHkv+VZFeSk2qtVw+4SbCkSinnJHlKxrYpb0ty\ndZKLk6zO2C8HvqjWur2U8oIkJ2fsGWLvqrW+e0BNhkVXStkzyZeTvDnJx6ImIOPr/MYkO5KckeS6\nqA0aVkq5T5L3JFmXZI8kZya5KcmfZezc4rpa68vH+z01ya+Ndz+z1vqRgTQaFlEp5dAkH0ryjlrr\n+aWUn0mH+4lSym5J/irJgUl2JvntWusNg5iOYeeKoB6VUp6W5GG11l9KclySPx5wk2BJlVKenuTQ\n8Ro4Osm5Sd6U5IJa61OSXJ9kfSll74wd9B+Z5LAkp5RS9hlMq2FJnJbklvHXaoLmlVL2TfKGJE9O\n8pwkz4vagJcmqbXWp2fs15fPy9ix1Em11icluV8p5ZmllIck+c38uH7+qJSyekBthkUxvv3/k4x9\ngTZhIfuJ/5Pk1lrrk5OclbEvqJmBIKh3RyT5pySptX41ybpSyn0H2yRYUp/I2LdTSXJrkr0ztkG+\nZLzbhzO2kX5CkqtrrbfVWn+Q5FNJnrS0TYWlUUp5eJJDklw63umwqAk4MskVtdZttdbv1FpfFrUB\n30uy7/jrdRn7AuEhk+4wmKiLpyf551rrj2qtW5J8M2P7GVhJtid5VpLNk7odls73E0ck+cfxfq+I\nfcesBEG9e0CSLZP+3jLeDZpQa91Za71j/M/jknwkyd611u3j3W5Osn+m18pEd1iJ/jDJqyf9rSYg\neXCSvUopl5RSriqlHBG1QeNqrR9I8qBSyvUZ+3LtNUm2TupFXdCMWuuO8WBnsoXsJ+7pXmu9O8mu\nUsrui9vq5UkQ1H8jg24ADEIp5XkZC4JeMeWt2WpCrbAilVJenOTTtdYbZ+lFTdCqkYxd+fCrGbsd\n5i9z7/VebdCcUsoLk3yr1npwksOT/PWUXtQF/NhC60GdzEIQ1LvNufcVQD+dsYdYQTNKKc9I8vok\nz6y13pbk9vEH5SbJARmrk6m1MtEdVppnJ3leKeUzSY5PcnrUBCTJd5P82/g3vl9Psi3JNrVB456U\n5PIkqbVem2TPJPtNel9d0LqFHEPd0338wdEjtdYfLWFblw1BUO8+mrEHu6WU8pgkm2u
t2wbbJFg6\npZT7InJvZCBpbmRleCI6I... [base64 image payload of L27 retained verbatim as in the original line] ...AElFTkSuQmCC\n", "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pmaz68ZrsFOb" }, "source": [ "If you have time, try increasing the training n_epochs from 10 to around 30-50, adding more layers, or increasing the number of channels, and see whether you can train a more accurate model.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "7ThNVkbDGWrN" }, "source": [ "## References\n", "\n", "* [1] \"Sequential regulatory activity prediction across chromosomes with convolutional neural networks\", D. R. Kelley et al., Genome Res. 2018, 28: 739-750\n", "* [2] \"Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks\", T. Salimans et al., arXiv:1602.07868\n", "* [3] \"Language Modeling with Gated Convolutional Networks\", Y. N. Dauphin et al., arXiv:1612.08083\n", "* [4] \"Densely Connected Convolutional Networks\", G. Huang et al., CVPR 2017" ] } ] }