{"id":4435,"date":"2025-09-29T21:12:36","date_gmt":"2025-09-29T14:12:36","guid":{"rendered":"https:\/\/unnes.ac.id\/ictcenter\/?p=4435"},"modified":"2025-10-01T09:10:19","modified_gmt":"2025-10-01T02:10:19","slug":"panduan-penggunaan-server-gpu-supermicro-amd-instinct-mi210","status":"publish","type":"post","link":"https:\/\/unnes.ac.id\/ictcenter\/id\/2025\/09\/29\/panduan-penggunaan-server-gpu-supermicro-amd-instinct-mi210\/","title":{"rendered":"Guide to Using the Supermicro AMD Instinct MI210 GPU Server"},"content":{"rendered":"\n<p class=\"has-black-color has-text-color has-link-color wp-elements-ecbbf31a4f173f47e1f32e211cff30dd\">Starting in 2025, Universitas Negeri Semarang has expanded its high-performance computing (HPC) facilities beyond the NVIDIA DGX A100 with a Supermicro AMD Instinct MI210 server. The following is a guide to using the <a href=\"https:\/\/copyprompt.id\/cara-benchmarking-gpu-amd-instinct-mi210-dengan-qwen3-14b.html\">Supermicro AMD Instinct MI210<\/a> GPU server for the academic community of Universitas Negeri Semarang.<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"720\" src=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/supermicro-amd-unnes.webp\" alt=\"\" class=\"wp-image-4431\" style=\"width:471px;height:auto\" srcset=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/supermicro-amd-unnes.webp 960w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/supermicro-amd-unnes-300x225.webp 300w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/supermicro-amd-unnes-768x576.webp 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/figure>\n\n\n\n
<h2 class=\"wp-block-heading\">Restrictions<\/h2>\n\n\n\n
<ul class=\"wp-block-list\">\n<li>Users of the Supermicro AMD Instinct MI210 GPU server facility are lecturers, staff, and students of Universitas Negeri Semarang, as well as researchers and the general public<\/li>\n\n\n\n<li>Use of the facility is subject to Universitas Negeri Semarang's equipment-use regulations and standard service tariffs when it is used for commercial purposes<\/li>\n\n\n\n<li>Use of the facility is limited according to computational needs, a review of the usage plan\/application flowchart, and server\/GPU availability<\/li>\n\n\n\n<li>Use of the facility is subject to the laws in force in Indonesia, in particular the laws on Electronic Information and Transactions, the National System of Science and Technology, Personal Data Protection, Copyright, and Pornography.<\/li>\n<\/ul>\n\n\n\n
<h2 class=\"wp-block-heading\">Specifications<\/h2>\n\n\n\n
<ul class=\"wp-block-list\">\n<li>1 Supermicro server with an AMD EPYC CPU<\/li>\n\n\n\n<li>1 AMD Instinct MI210 GPU with 64 GB VRAM<\/li>\n<\/ul>\n\n\n\n
<h2 class=\"wp-block-heading\">Account Requests<\/h2>\n\n\n\n
<p>Account requests and cooperation agreements for the use of this AI server facility can be submitted through the UNNES online integrated service desk at <a href=\"https:\/\/helpdesk.unnes.ac.id\">https:\/\/helpdesk.unnes.ac.id<\/a> or via the MyUNNES app (Android\/iOS) with the subject &#8220;Permohonan Fasilitas Penelitian AI Server AMD&#8221; (Request for the AMD AI Server Research Facility).<\/p>\n\n\n\n
<h2 class=\"wp-block-heading\">Usage Procedure<\/h2>\n\n\n\n
<ul class=\"wp-block-list\">\n<li>Users receive a Portainer account and a VPN account to access the AI server, within the limits set by Subdit Sistem Informasi,<\/li>\n\n\n\n<li>Users create GPU-enabled containers under their own accounts in accordance with Subdit Sistem Informasi policy,<\/li>\n\n\n\n<li>Users connect to the server over the VPN and log in with their SSH account to upload\/download artifact\/video\/model files or other files as needed,<\/li>\n\n\n\n<li>Users run programs\/applications in a restricted environment, inside a Docker container specified and provided by Subdit Sistem Informasi, or an image that has been inspected and approved for use by Subdit Sistem Informasi,<\/li>\n\n\n\n<li>Users download the output produced by the Docker container as needed<\/li>\n<\/ul>\n\n\n\n
<h2 class=\"wp-block-heading\">Pre-provided Docker Images<\/h2>\n\n\n\n
<ul class=\"wp-block-list\">\n<li>AMD ROCm TensorFlow: <em>rocm\/tensorflow:latest<\/em><\/li>\n\n\n\n<li>AMD ROCm 7.0.x Terminal: <em>rocm\/rocm-terminal:latest<\/em><\/li>\n\n\n\n<li>AMD ROCm Ubuntu 24.04 with Python 3: <em>rocm\/rocm-build-ubuntu-24.04:6.4<\/em><\/li>\n\n\n\n<li>AMD ROCm with PyTorch: <em>rocm\/pytorch:latest<\/em><\/li>\n\n\n\n<li>AMD ROCm vLLM: <em>rocm\/vllm:latest<\/em><\/li>\n<\/ul>\n\n\n\n
<p class=\"has-white-color has-vivid-red-background-color has-text-color has-background has-link-color wp-elements-45426796ed713e81126a0cbc89dbaec3\">Because the AMD Instinct platform uses the ROCm driver and software ecosystem (rather than CUDA as on NVIDIA GPUs), tutorials and containers targeting ROCm are still far less plentiful than those for NVIDIA CUDA.<\/p>\n\n\n\n
<h2 class=\"wp-block-heading\">Example Usage of the UNNES AMD Instinct MI210 GPU Server<\/h2>\n\n\n\n
<h3 class=\"wp-block-heading\">Checking the GPU Driver Installed on the Server<\/h3>\n\n\n\n
<p>Run a Docker container by executing the following command:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>docker run -it \\\n    --network=host \\\n    --device=\/dev\/kfd \\\n    --device=\/dev\/dri \\\n    --ipc=host \\\n    --shm-size 16G \\\n    --group-add video \\\n    --cap-add=SYS_PTRACE \\\n    --security-opt seccomp=unconfined \\\n    rocm\/rocm-terminal:latest \\\n    rocm-smi<\/code><\/pre>\n\n\n\n
<p>Note: the command above makes all available GPUs visible to the container, runs inside AMD's stock ROCm image <strong>rocm\/rocm-terminal<\/strong>, and the tool executed is <strong>rocm-smi<\/strong>.<\/p>\n\n\n\n
<p>Example output:<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"948\" height=\"165\" src=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/rocm-smi-example-unnes.png\" alt=\"\" class=\"wp-image-4423\" srcset=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/rocm-smi-example-unnes.png 948w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/rocm-smi-example-unnes-300x52.png 300w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/rocm-smi-example-unnes-768x134.png 768w\" sizes=\"auto, (max-width: 948px) 100vw, 948px\" \/><\/figure>\n\n\n\n
<h3 class=\"wp-block-heading\">Example: Running Python Code That Requires TensorFlow<\/h3>\n\n\n\n
<p>Upload Python source code that has been tested on your local machine to your own folder on the server using FileZilla. An example Python script using the TensorFlow library:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>import tensorflow as tf\nprint(\"TensorFlow version:\", tf.__version__)\nmnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train \/ 255.0, x_test \/ 255.0\nmodel = tf.keras.models.Sequential(&#091;\n  tf.keras.layers.Flatten(input_shape=(28, 28)),\n  tf.keras.layers.Dense(128, activation='relu'),\n  tf.keras.layers.Dropout(0.2),\n  tf.keras.layers.Dense(10)\n])\npredictions = model(x_train&#091;:1]).numpy()\ntf.nn.softmax(predictions).numpy()\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\nloss_fn(y_train&#091;:1], predictions).numpy()\nmodel.compile(optimizer='adam',\n              loss=loss_fn,\n              metrics=&#091;'accuracy'])\nmodel.fit(x_train, y_train, epochs=5)\nmodel.evaluate(x_test,  y_test, verbose=2)<\/code><\/pre>\n\n\n\n
<p>Then execute it in a container as follows:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>docker run -it \\\n    --network=host \\\n    --device=\/dev\/kfd \\\n    --device=\/dev\/dri \\\n    --ipc=host \\\n    --shm-size 16G \\\n    --group-add video \\\n    --cap-add=SYS_PTRACE \\\n    --security-opt seccomp=unconfined \\\n    -v $(pwd):\/workspace \\\n    rocm\/tensorflow:latest \\\n    python \/workspace\/tensorflow-example.py<\/code><\/pre>\n\n\n\n
<p>The output produced corresponds to the tensorflow-example.py application:<\/p>\n\n\n\n
<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"328\" src=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes-1024x328.png\" alt=\"\" class=\"wp-image-4424\" srcset=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes-1024x328.png 1024w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes-300x96.png 300w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes-768x246.png 768w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes-1536x492.png 1536w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-supermicro-rocm-tensorflow-unnes.png 1696w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n
<h3 class=\"wp-block-heading\">Simple Computation with PyTorch<\/h3>\n\n\n\n
<p>Write the source code for a simple MNIST computation using the example below, then execute it in the rocm\/pytorch container:<\/p>\n\n\n\n
<p>source code:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>import argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom 
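torch.optim.lr_scheduler import StepLR\n\n# Quick sanity check (a sketch, assuming the rocm\/pytorch image): ROCm builds\n# of PyTorch reuse the CUDA-style device API, so torch.cuda.is_available()\n# should return True on the MI210, and torch.version.hip (rather than\n# torch.version.cuda) identifies the HIP\/ROCm build in use.\nprint(\"Accelerator available:\", torch.cuda.is_available())\nprint(\"HIP version:\", torch.version.hip)\nfrom 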
torch.optim.lr_scheduler import StepLR\n\n\nclass Net(nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n        self.conv1 = nn.Conv2d(1, 32, 3, 1)\n        self.conv2 = nn.Conv2d(32, 64, 3, 1)\n        self.dropout1 = nn.Dropout(0.25)\n        self.dropout2 = nn.Dropout(0.5)\n        self.fc1 = nn.Linear(9216, 128)\n        self.fc2 = nn.Linear(128, 10)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = F.max_pool2d(x, 2)\n        x = self.dropout1(x)\n        x = torch.flatten(x, 1)\n        x = self.fc1(x)\n        x = F.relu(x)\n        x = self.dropout2(x)\n        x = self.fc2(x)\n        output = F.log_softmax(x, dim=1)\n        return output\n\n\ndef train(args, model, device, train_loader, optimizer, epoch):\n    model.train()\n    for batch_idx, (data, target) in enumerate(train_loader):\n        data, target = data.to(device), target.to(device)\n        optimizer.zero_grad()\n        output = model(data)\n        loss = F.nll_loss(output, target)\n        loss.backward()\n        optimizer.step()\n        if batch_idx % args.log_interval == 0:\n            print('Train Epoch: {} &#091;{}\/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch, batch_idx * len(data), len(train_loader.dataset),\n                100. 
* batch_idx \/ len(train_loader), loss.item()))\n            if args.dry_run:\n                break\n\n\ndef test(model, device, test_loader):\n    model.eval()\n    test_loss = 0\n    correct = 0\n    with torch.no_grad():\n        for data, target in test_loader:\n            data, target = data.to(device), target.to(device)\n            output = model(data)\n            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n            correct += pred.eq(target.view_as(pred)).sum().item()\n\n    test_loss \/= len(test_loader.dataset)\n\n    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}\/{} ({:.0f}%)\\n'.format(\n        test_loss, correct, len(test_loader.dataset),\n        100. * correct \/ len(test_loader.dataset)))\n\n\ndef main():\n    # Training settings\n    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')\n    parser.add_argument('--batch-size', type=int, default=64, metavar='N',\n                        help='input batch size for training (default: 64)')\n    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n                        help='input batch size for testing (default: 1000)')\n    parser.add_argument('--epochs', type=int, default=14, metavar='N',\n                        help='number of epochs to train (default: 14)')\n    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',\n                        help='learning rate (default: 1.0)')\n    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',\n                        help='Learning rate step gamma (default: 0.7)')\n    parser.add_argument('--no-accel', action='store_true',\n                        help='disables accelerator')\n    parser.add_argument('--dry-run', action='store_true',\n                        help='quickly check a single pass')\n    parser.add_argument('--seed', 
type=int, default=1, metavar='S',\n                        help='random seed (default: 1)')\n    parser.add_argument('--log-interval', type=int, default=10, metavar='N',\n                        help='how many batches to wait before logging training status')\n    parser.add_argument('--save-model', action='store_true', \n                        help='For Saving the current Model')\n    args = parser.parse_args()\n\n    use_accel = not args.no_accel and torch.accelerator.is_available()\n\n    torch.manual_seed(args.seed)\n\n    if use_accel:\n        device = torch.accelerator.current_accelerator()\n    else:\n        device = torch.device(\"cpu\")\n\n    train_kwargs = {'batch_size': args.batch_size}\n    test_kwargs = {'batch_size': args.test_batch_size}\n    if use_accel:\n        accel_kwargs = {'num_workers': 1,\n                        'persistent_workers': True,\n                       'pin_memory': True,\n                       'shuffle': True}\n        train_kwargs.update(accel_kwargs)\n        test_kwargs.update(accel_kwargs)\n\n    transform=transforms.Compose(&#091;\n        transforms.ToTensor(),\n        transforms.Normalize((0.1307,), (0.3081,))\n        ])\n    dataset1 = datasets.MNIST('..\/data', train=True, download=True,\n                       transform=transform)\n    dataset2 = datasets.MNIST('..\/data', train=False,\n                       transform=transform)\n    train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)\n    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)\n\n    model = Net().to(device)\n    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)\n\n    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)\n    for epoch in range(1, args.epochs + 1):\n        train(args, model, device, train_loader, optimizer, epoch)\n        test(model, device, test_loader)\n        scheduler.step()\n\n    if args.save_model:\n        torch.save(model.state_dict(), \"mnist_cnn.pt\")\n\n\nif __name__ 
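== '__main__':\n    # Usage note (a sketch): inside the rocm\/pytorch container this script can\n    # be smoke-tested with reduced work before a full run, e.g.\n    #   python \/workspace\/mnist.py --epochs 1 --dry-run\n    # and --save-model writes mnist_cnn.pt to the current working directory.\n    pass\n\nif __name__ 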
== '__main__':\n    main()<\/code><\/pre>\n\n\n\n
<p>then execute:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>docker run -it \\\n    --cap-add=SYS_PTRACE \\\n    --security-opt seccomp=unconfined \\\n    --device=\/dev\/kfd \\\n    --device=\/dev\/dri \\\n    --group-add video \\\n    --ipc=host \\\n    --shm-size 8G \\\n    -v $(pwd):\/workspace \\\n    rocm\/pytorch:latest \\\n    python \/workspace\/mnist.py\n<\/code><\/pre>\n\n\n\n
<p>Example output of the script above:<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"519\" height=\"391\" src=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-mnist-pytorch-rocm.png\" alt=\"\" class=\"wp-image-4425\" srcset=\"https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-mnist-pytorch-rocm.png 519w, https:\/\/unnes.ac.id\/ictcenter\/wp-content\/uploads\/sites\/2\/2025\/09\/contoh-mnist-pytorch-rocm-300x226.png 300w\" sizes=\"auto, (max-width: 519px) 100vw, 519px\" \/><\/figure>\n\n\n\n
<h2 class=\"wp-block-heading\">Example: TensorFlow Jupyter Notebook for AMD ROCm 7<\/h2>\n\n\n\n
<p>Create a new folder under your own account. Example: \/home\/namauser\/<strong>namauser-jupyter-notebook<\/strong><\/p>\n\n\n\n
<p>then create a file named <em>docker-compose.yml<\/em> inside it:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>\nservices:\n  jupyter-notebook:\n    image: infraunnes\/rocm-jupyternotebook:tensorflow\n    devices:\n      - \/dev\/kfd\n      - \/dev\/dri\n    group_add:\n      - video\n    shm_size: 16G\n    security_opt:\n      - seccomp=unconfined\n      - apparmor=unconfined\n    cap_add:\n      - SYS_PTRACE\n    ports:\n      - \"40001:8888\"\n    volumes:\n      - .\/data:\/home\/jovyan\/work\n    restart: \"no\"\n<\/code><\/pre>\n\n\n\n
<p>The Docker image here is <em>infraunnes\/rocm-jupyternotebook:tensorflow<\/em>; replace it to suit your needs. The HTTP endpoint port is <strong>40001<\/strong> (matching the port assigned when the service is provisioned).<\/p>\n\n\n\n
<p>Then open a console\/terminal in that folder and run <strong><em>docker compose up -d<\/em><\/strong>:<\/p>\n\n\n\n
<pre class=\"wp-block-code\"><code>cd \/home\/namauser\/namauser-jupyter-notebook\nchmod 777 \/home\/namauser\/namauser-jupyter-notebook\/data\ndocker compose up -d<\/code><\/pre>\n\n\n\n
<p>then log in at <em>http:\/\/10.2.16.99:40001\/<\/em> or as instructed via the helpdesk service.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Starting in 2025, Universitas Negeri Semarang has expanded its high-performance computing (HPC) facilities beyond the NVIDIA DGX A100 with a Supermicro AMD Instinct MI210 server. The following is a guide to using the Supermicro AMD Instinct MI210 GPU server for the academic community of Universitas Negeri Semarang. Restrictions Specifications Account Requests Account requests and cooperation agreements for the use of this AI server facility can be submitted through the UNNES online integrated service 
[&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[116],"tags":[],"class_list":["post-4435","post","type-post","status-publish","format-standard","hentry","category-programming-id"],"_links":{"self":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/4435","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/comments?post=4435"}],"version-history":[{"count":5,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/4435\/revisions"}],"predecessor-version":[{"id":4443,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/posts\/4435\/revisions\/4443"}],"wp:attachment":[{"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/media?parent=4435"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/categories?post=4435"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/unnes.ac.id\/ictcenter\/wp-json\/wp\/v2\/tags?post=4435"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}