{"id":3803,"date":"2023-06-01T20:25:59","date_gmt":"2023-06-01T13:25:59","guid":{"rendered":"https:\/\/unnes.ac.id\/ictcenter\/?p=3803"},"modified":"2025-11-11T16:42:13","modified_gmt":"2025-11-11T09:42:13","slug":"panduan-penggunaan-ai-server-nvidia-tesla-a100","status":"publish","type":"post","link":"https:\/\/unnes.ac.id\/ictcenter\/id\/2023\/06\/01\/panduan-penggunaan-ai-server-nvidia-tesla-a100\/","title":{"rendered":"Guide to Using the AI Server + NVIDIA Tesla A100"},"content":{"rendered":"\n<p>Universitas Negeri Semarang operates a high-performance computing facility powered by the latest-generation NVIDIA Tesla A100 GPU. Below is a guide to using the AI Server and the NVIDIA Tesla A100 for the academic community of Universitas Negeri Semarang.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"400\" height=\"244\" src=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2023\/06\/ai-server-unnes-2.png\" alt=\"\" class=\"wp-image-3819\" srcset=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2023\/06\/ai-server-unnes-2.png 400w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2023\/06\/ai-server-unnes-2-300x183.png 300w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Limitations<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Users of the AI Server + NVIDIA Tesla A100 facility are lecturers, staff, and students of Universitas Negeri Semarang, as well as researchers and members of the general public<\/li>\n\n\n\n<li>Use of the facility is subject to Universitas Negeri Semarang's equipment-use regulations and standard service tariffs whenever it is used for commercial purposes<\/li>\n\n\n\n<li>Use of the facility is limited according to computing needs, a review of the usage plan\/application flowchart, and server\/GPU availability<\/li>\n\n\n\n<li>Use of the facility is subject to the laws in force in Indonesia, 
in particular the laws on Electronic Information and Transactions, the National System of Science and Technology, Personal Data Protection, Copyright, and Pornography.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Specifications<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>3 xFusion 2288H V5 servers @ 40 CPU cores<\/li>\n\n\n\n<li>6 NVIDIA Tesla A100 80GB PCIe GPUs, each with 9.7 teraflops FP64, 80 GB HBM2e memory, and 1,935 GB\/s memory bandwidth<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Account Requests<\/h2>\n\n\n\n<p>Account requests and cooperation arrangements for using the AI Server facility can be submitted through the UNNES integrated service desk, ALT (<a href=\"https:\/\/unnes.ac.id\/helpdesk\" data-type=\"URL\" data-id=\"https:\/\/unnes.ac.id\/helpdesk\">unnes.ac.id\/helpdesk<\/a>), or via the MyUNNES app (Android\/iOS) with the subject &#8220;Permohonan Fasilitas Penelitian AI Server&#8221;.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Usage Procedure<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The user receives a Portainer account and a VPN account to access the AI Server, within limits set by the Information Systems Subdirectorate (Subdit Sistem Informasi),<\/li>\n\n\n\n<li>The user creates a GPU-enabled container with their own Portainer account, in accordance with Subdit Sistem Informasi policy,<\/li>\n\n\n\n<li>The user connects to the server over the VPN and logs in with an SSH account to upload\/download artefact\/video\/model files or other files as needed,<\/li>\n\n\n\n<li>The user runs programs\/applications inside a restricted Docker container that Subdit Sistem Informasi specifies and provides, or inside an image that Subdit Sistem Informasi has reviewed and approved,<\/li>\n\n\n\n<li>The user downloads the results produced by the Docker container as needed<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Docker 
Images Provided<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>NVIDIA CUDA: <em>nvidia\/cuda:12.0.1-base-ubuntu22.04<\/em><\/li>\n\n\n\n<li>TensorFlow GPU: <em>tensorflow\/tensorflow:latest-gpu<\/em><\/li>\n\n\n\n<li>Ubuntu 22.04 with Python 3: <em>infraunnes\/ai-server:ubuntu22.04-python3<\/em><\/li>\n\n\n\n<li>TensorFlow, Ubuntu 22.04, Python 3: <em>infraunnes\/ai-server:ubuntu22.04-python3-tensorflow<\/em><\/li>\n\n\n\n<li>OpenCV, Ubuntu 22.04, Python 3: <em>infraunnes\/ai-server:ubuntu22.04-python3-tensorflow<\/em><\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">AI Server Usage Examples<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Checking the GPU Driver Installed on the Server<\/h3>\n\n\n\n<p>Run a Docker container by executing the following command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>docker run --rm --gpus all --name namauser-nvidia-smi-test nvidia\/cuda:12.6.3-cudnn-runtime-ubuntu22.04 nvidia-smi<\/code><\/pre>\n\n\n\n<p>Explanation: the command above uses all available GPUs, runs inside NVIDIA's stock CUDA image <strong>cuda:12.6.3-cudnn-runtime-ubuntu22.04<\/strong>, and executes the <strong>nvidia-smi<\/strong> tool.<\/p>\n\n\n\n<p>Example output:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 560.28.03              Driver Version: 560.28.03      CUDA Version: 12.6     |\n|-----------------------------------------+------------------------+----------------------+\n| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |\n| Fan  Temp   Perf          Pwr:Usage\/Cap |           Memory-Usage | GPU-Util  Compute M. |\n|                                         |                        |               MIG M. 
|\n|=========================================+========================+======================|\n|   0  NVIDIA A100 80GB PCIe          Off |   00000000:3B:00.0 Off |                    0 |\n| N\/A   26C    P0             39W \/  300W |       4MiB \/  81920MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   1  NVIDIA A100 80GB PCIe          Off |   00000000:86:00.0 Off |                    0 |\n| N\/A   27C    P0             40W \/  300W |       4MiB \/  81920MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n\n+-----------------------------------------------------------------------------------------+\n| Processes:                                                                              |\n|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |\n|        ID   ID                                                               Usage      |\n|=========================================================================================|\n|  No running processes found                                                             |\n+-----------------------------------------------------------------------------------------+\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Example: Running Python Code that Requires the TensorFlow Library<\/h3>\n\n\n\n<p>Upload Python source code that has been tested on your local computer to your own folder on the server using FileZilla. 
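<\/p>\n\n\n\n<p>As an illustration, a hypothetical minimal script of this kind (reusing the guide's <em>nama_aplikasi.py<\/em> placeholder name; this particular script is an assumed example, not a file provided by the service) simply lists the GPUs that TensorFlow can see:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># nama_aplikasi.py - hypothetical example: list the GPUs visible to TensorFlow\nimport tensorflow as tf\n\ngpus = tf.config.list_physical_devices('GPU')\nprint('Num GPUs available:', len(gpus))\nfor gpu in gpus:\n    print(gpu)<\/code><\/pre>\n\n\n\n<p>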
Then execute it in a container as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>docker run -u $(id -u):$(id -g) --gpus all --name namauser-container -v \/home\/namauser\/nama_aplikasi.py:\/tmp\/nama_aplikasi.py -w \/tmp -it tensorflow\/tensorflow:latest-gpu python \/tmp\/nama_aplikasi.py<\/code><\/pre>\n\n\n\n<p>Explanation:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>-u $(id -u):$(id -g): run as the same user and group you logged in with,<\/li>\n\n\n\n<li>--gpus all: use all GPUs installed in the server,<\/li>\n\n\n\n<li>-v \/home\/namauser\/nama_aplikasi.py:\/tmp\/nama_aplikasi.py: mount\/bind the script file into the container at \/tmp\/nama_aplikasi.py<\/li>\n\n\n\n<li>-it tensorflow\/tensorflow:latest-gpu: run the tensorflow\/tensorflow:latest-gpu image with an interactive terminal<\/li>\n\n\n\n<li>python \/tmp\/nama_aplikasi.py: execute the file nama_aplikasi.py with the Python interpreter<\/li>\n<\/ul>\n\n\n\n<p>The output produced depends on the nama_aplikasi.py application.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Simple GPU Computation<\/h3>\n\n\n\n<p>Write out the following example source code, then execute it in the TensorFlow container:<\/p>\n\n\n\n<p>Source code:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from __future__ import print_function\n'''\nBasic Multi GPU computation example using TensorFlow library.\n\nAuthor: Aymeric Damien\nProject: https:\/\/github.com\/aymericdamien\/TensorFlow-Examples\/\n'''\n\n'''\nThis tutorial requires your machine to have 2 GPUs\n\"\/cpu:0\": The CPU of your machine.\n\"\/gpu:0\": The first GPU of your machine\n\"\/gpu:1\": The second GPU of your machine\n'''\n\n\n\nimport numpy as np\n#import tensorflow as tf\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\nimport datetime\n\n# Processing Units logs\nlog_device_placement = True\n\n# Num of multiplications to perform\nn = 10\n\n'''\nExample: compute A^n + B^n on 2 GPUs\nResults on 8 cores with 2 GTX-980:\n * Single GPU computation time: 0:00:11.277449\n * Multi GPU computation 
time: 0:00:07.131701\n'''\n# Create random large matrix\nA = np.random.rand(10000, 10000).astype('float32')\nB = np.random.rand(10000, 10000).astype('float32')\n\n# Create a graph to store results\nc1 = &#091;]\nc2 = &#091;]\n\ndef matpow(M, n):\n    if n &lt; 1: #Abstract cases where n &lt; 1\n        return M\n    else:\n        return tf.matmul(M, matpow(M, n-1))\n\n'''\nSingle GPU computing\n'''\nwith tf.device('\/gpu:0'):\n    a = tf.placeholder(tf.float32, &#091;10000, 10000])\n    b = tf.placeholder(tf.float32, &#091;10000, 10000])\n    # Compute A^n and B^n and store results in c1\n    c1.append(matpow(a, n))\n    c1.append(matpow(b, n))\n\nwith tf.device('\/cpu:0'):\n  sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n\n\nt1_1 = datetime.datetime.now()\nwith tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:\n    # Run the op.\n    sess.run(sum, {a:A, b:B})\nt2_1 = datetime.datetime.now()\n\n\n'''\nMulti GPU computing\n'''\n# GPU:0 computes A^n\nwith tf.device('\/gpu:0'):\n    # Compute A^n and store result in c2\n    a = tf.placeholder(tf.float32, &#091;10000, 10000])\n    c2.append(matpow(a, n))\n\n# GPU:1 computes B^n\nwith tf.device('\/gpu:1'):\n    # Compute B^n and store result in c2\n    b = tf.placeholder(tf.float32, &#091;10000, 10000])\n    c2.append(matpow(b, n))\n\nwith tf.device('\/cpu:0'):\n  sum = tf.add_n(c2) #Addition of all elements in c2, i.e. 
A^n + B^n\n\nt1_2 = datetime.datetime.now()\nwith tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:\n    # Run the op.\n    sess.run(sum, {a:A, b:B})\nt2_2 = datetime.datetime.now()\n\n\nprint(\"Single GPU computation time: \" + str(t2_1-t1_1))\nprint(\"Multi GPU computation time: \" + str(t2_2-t1_2))<\/code><\/pre>\n\n\n\n<p>Then execute:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>docker run --rm --name namauser-container \\\n    -u $(id -u):$(id -g) \\\n    --gpus all \\\n    -v \/home\/namauser\/basic_computation.py:\/tmp\/basic_computation.py \\\n    -w \/tmp \\\n    -it tensorflow\/tensorflow:latest-gpu \\\n    python \/tmp\/basic_computation.py<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-cb1b6396b66ccb5375b8bcb905c76d91\"><strong>NB. <em>Make sure namauser is replaced with your own username, so that your running container can be identified.<\/em><\/strong><\/p>\n\n\n\n<p>Example output of the script above:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Single GPU computation time: 0:00:03.229350<br>Multi GPU computation time: 0:00:01.781737<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Example: Jupyter Notebook and HTTP Endpoint<\/h2>\n\n\n\n<p>Create a new folder in your own account. 
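<\/p>\n\n\n\n<p>Note that the <em>jupyterdata<\/em> subfolder mounted by the compose file below must exist first. As a minimal sketch (assuming your home folder is \/home\/namauser as in the examples, and using Python purely for illustration), it can be created with:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># hypothetical helper: create the notebook folder and its jupyterdata subfolder\nimport os\n\npath = os.path.expanduser('~\/namauser-jupyter-notebook\/jupyterdata')\nos.makedirs(path, exist_ok=True)\nprint(path)<\/code><\/pre>\n\n\n\n<p>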
Example: \/home\/namauser\/<strong>namauser-jupyter-notebook<\/strong><\/p>\n\n\n\n<p>Then create a file named <em>docker-compose.yml<\/em> inside it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>services:\n  jupyter-notebook:\n    image: jupyter\/tensorflow-notebook:latest\n    container_name: namauser-notebook \n    ports:\n      - \"40001:8888\"\n    volumes:\n      - \/home\/namauser\/namauser-jupyter-notebook\/jupyterdata:\/home\/jovyan\/work\n    deploy:\n      resources:\n        reservations:\n          devices:\n            - capabilities: &#091;gpu]\n    restart: \"no\"<\/code><\/pre>\n\n\n\n<p>This example uses the Docker image <em>jupyter\/tensorflow-notebook:latest<\/em>; replace it to suit your needs. The HTTP endpoint port is <strong>40001<\/strong> (use the port assigned to you when the service was set up).<\/p>\n\n\n\n<p>Then, in a console\/terminal, change into that folder and run <strong><em>docker compose up -d<\/em><\/strong>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd \/home\/namauser\/namauser-jupyter-notebook\nchmod 777 \/home\/namauser\/namauser-jupyter-notebook\/jupyterdata\ndocker compose up -d<\/code><\/pre>\n\n\n\n<p>and log in at <em>http:\/\/10.2.16.101:40001\/<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Universitas Negeri Semarang operates a high-performance computing facility powered by the latest-generation NVIDIA Tesla A100 GPU. Below is a guide to using the AI Server and the NVIDIA Tesla A100 for the academic community of Universitas Negeri Semarang. 
Limitations Specifications Account Requests Account requests and cooperation arrangements for using the AI Server facility can be submitted through the UNNES integrated service [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":3833,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[105,116],"tags":[],"class_list":["post-3803","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-kabar","category-programming-id"],"_links":{"self":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/3803","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/comments?post=3803"}],"version-history":[{"count":5,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/3803\/revisions"}],"predecessor-version":[{"id":4455,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/3803\/revisions\/4455"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/media\/3833"}],"wp:attachment":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/media?parent=3803"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/categories?post=3803"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/tags?post=3803"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}