
Search Results

Found 3 documents matching the query
Ari Nugroho
"ABSTRAK
Densely Connected Convolutional Networks (DenseNet) merupakan salah satu
model arsitektur Deep Learning yang menghubungkan setiap layer beserta feature-maps ke seluruh layer berikutnya, sehingga layer berikutnya menerima input
feature-maps dari seluruh layer sebelumnya. Karena padatnya arsitektur DenseNet
meyebabkan komputasi model memerlukan waktu lama dan pemakaian memory
GPU yang besar. Penelitian ini mengembangkan metode optimisasi DenseNet
menggunakan batching strategy yang bertujuan untuk mengatasi permasalahan
DenseNet dalam hal percepatan komputasi dan penghematan ruang memory GPU.
Batching strategy adalah metode yang digunakan dalam Stochastic Gradient
Descent (SGD) dimana metode tersebut menerapkan metode dinamik batching
dengan inisialisasi awal menggunakan ukuran batch kecil dan ditingkatkan
ukurannya secara adaptif selama training hingga sampai ukuran batch besar agar
terjadi peningkatan paralelisasi komputasi untuk mempercepat waktu pelatihan.
Metode batching strategy juga dilengkapi dengan manajemen memory GPU
menggunakan metode gradient accumulation. Dari hasil percobaan dan pengujian
terhadap metode tersebut dihasilkan peningkatan kecepatan waktu pelatihan hingga
1,7x pada dataset CIFAR-10 dan 1,5x pada dataset CIFAR-100 serta dapat
meningkatkan akurasi DenseNet. Manajemen memory yang digunakan dapat
menghemat memory GPU hingga 30% jika dibandingkan dengan native DenseNet.
Dataset yang digunakan menggunakan CIFAR-10 dan CIFAR-100 datasets.
Penerapan metode batching strategy tersebut terbukti dapat menghasilkan
percepatan dan penghematan ruang memory GPU.

ABSTRACT
Densely Connected Convolutional Networks (DenseNet) is one of the Deep
Learning architecture models that connect each layer and feature maps to all
subsequent layers so that the next layer receives input feature maps from all
previous layers. Because of its DenseNet architecture, computational models
require a long time and use large GPU memory. This research develops the
DenseNet optimization method using a batching strategy that aims to overcome the
DenseNet problem in terms of accelerating computing time and saving GPU
memory. Batching strategy is a method used in Stochastic Gradient Descent (SGD)
where the technique applies dynamic batching approach with initial initialization
using small batch sizes and adaptively increased size during training to large batch
sizes so that there is an increase in computational parallelization to speed up training
time. The batching strategy method is also equipped with GPU memory
management using the gradient accumulation method. From the results of
experiments and testing of these methods resulted in an increase in training time
speed of up to 1.7x on the CIFAR-10 dataset and 1.5x on the CIFAR-100 dataset
and can improve DenseNet accuracy. Memory management used can save GPU
memory up to 30% when compared to native DenseNet. The dataset used uses
CIFAR-10 and CIFAR-100 datasets. The application of the batching strategy
method is proven to be able to produce acceleration and saving of GPU memory."
2020
T-Pdf
UI - Tesis Membership  Universitas Indonesia Library
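
The abstract describes two mechanisms that can be made concrete in code: an adaptive schedule that grows the SGD batch size as training progresses, and gradient accumulation that emulates a large batch with small micro-batches so the per-step GPU memory footprint stays bounded. The PyTorch sketch below is a minimal illustration under assumed settings — the schedule `SCHEDULE`, the memory limit `MICRO_BATCH`, and the use of torchvision's densenet121 are hypothetical choices, not the thesis's actual implementation:

```python
# Illustrative sketch (not the thesis code): an adaptive batch-size
# schedule combined with gradient accumulation, as the abstract describes.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.densenet121(num_classes=10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

dataset = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())

MICRO_BATCH = 64                       # largest batch that fits in GPU memory (assumed)
SCHEDULE = {0: 64, 30: 128, 60: 256}   # epoch -> effective batch size (assumed)

model.train()
batch_size = SCHEDULE[0]
for epoch in range(90):
    batch_size = SCHEDULE.get(epoch, batch_size)     # grow the batch adaptively
    accum_steps = max(1, batch_size // MICRO_BATCH)  # micro-batches per SGD step
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=MICRO_BATCH, shuffle=True, drop_last=True)

    opt.zero_grad()
    for i, (x, y) in enumerate(loader):
        x, y = x.to(device), y.to(device)
        # Scale each micro-batch loss so the accumulated gradient
        # matches that of one large batch of size `batch_size`.
        loss = loss_fn(model(x), y) / accum_steps
        loss.backward()                  # gradients accumulate in .grad
        if (i + 1) % accum_steps == 0:   # step once per effective batch
            opt.step()
            opt.zero_grad()
```

Scaling each micro-batch loss by the number of accumulation steps keeps the accumulated gradient equal to that of one large batch, which is the property that lets a memory-constrained GPU follow the growing batch schedule.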
Sakamoto, Kazuki
"This book shows you how ARC works and how best to incorporate it into your applications. Grand Central Dispatch (GCD) and blocks are key to developing great apps, allowing you to control threads for maximum performance.
If, for you, multithreading is an unsolved mystery and ARC is unexplored territory, then this is the book you'll need to make these concepts clear and send you on your way to becoming a master iOS and OS X developer. What are blocks? How are they used with GCD? Multithreading with GCD. Managing objects with ARC."
New York: Springer, 2012
e20425596
eBooks  Universitas Indonesia Library
"Practical load balancing presents an entire analytical framework to increase performance not just of one machine, but of your entire infrastructure.
Practical load balancing starts by introducing key concepts and the tools you'll need to tackle your load-balancing issues. You'll travel through the IP layers and learn how they can create increased network traffic for you. You'll see how to account for persistence and state, and how you can judge the performance of scheduling algorithms.
You'll then learn how to avoid performance degradation and any risk of the sudden disappearance of a service on a server. If you're concerned with running your load balancer for an entire network, you'll find out how to set up your network topography, and condense each topographical variety into recipes that will serve you in different situations. You'll also learn about individual servers, and load balancers that can perform cookie insertion or improve your SSL throughput.
You'll also explore load balancing in the modern context of the cloud. While load balancers need to be configured for high availability once the conditions on the network have been created, modern load balancing has found its way into the cloud, where good balancing is vital for the very functioning of the cloud, and where IPv6 is becoming ever more important."
New York: Springer, 2012
e20426558
eBooks  Universitas Indonesia Library
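
Although the blurb only advertises the book's coverage, the scheduling algorithms it mentions are easy to sketch. Below is a minimal, hypothetical illustration of two classic schedulers a load balancer might use, round-robin and least-connections; the class names and addresses are invented for illustration and are not taken from the book:

```python
# Minimal sketch of two classic load-balancing schedulers:
# round-robin and least-connections. All names are illustrative.
import itertools

class RoundRobin:
    """Hands out backends in a fixed cyclic order."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Prefers the backend currently serving the fewest connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1      # caller must release() when done
        return backend

    def release(self, backend):
        self.active[backend] -= 1

rr = RoundRobin(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([rr.pick() for _ in range(4)])   # cycles: .1, .2, .3, .1

lc = LeastConnections(["10.0.0.1", "10.0.0.2"])
a = lc.pick(); b = lc.pick()           # one connection on each backend
lc.release(a)                          # .1 becomes least loaded again
print(lc.pick())                       # -> 10.0.0.1
```

Round-robin is stateless and fair when requests cost roughly the same, while least-connections adapts when some requests are long-lived; that trade-off is the kind of behavior a scheduling benchmark would expose.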