The growing volume of credit card transactions, accompanied by increasingly rampant fraud, has motivated research on predictive models for fraudulent credit card transactions. Doku credit card transaction data is used as the data source for this study. This study develops both a prediction model and a web service for predicting fraudulent credit card transactions. The features used to build the model are amount, payment bank issuer, payment bank acquirer, payment brand, payment 3D secure ECI, payment type, payment bank issuer country, and hour. The Decision Tree model achieves the best precision and F1-score, at 97.2% and 96.8%. The XGBoost model achieves the best recall and FP-rate, at 96.4% and 3%. Both models achieve the same best accuracy of 96.7%. With regard to the web service, XGBoost performs best, with an average throughput of 77 requests per second.
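The four metrics reported above all derive from the confusion matrix of a binary classifier. As a minimal sketch (with made-up illustrative counts, not the study's actual data), they can be computed as follows:

```python
def classification_metrics(tp, fp, fn, tn):
    """Return (precision, recall, f1, fp_rate) from confusion-matrix counts."""
    precision = tp / (tp + fp)          # flagged transactions that were truly fraud
    recall = tp / (tp + fn)             # fraud cases the model caught
    f1 = 2 * precision * recall / (precision + recall)
    fp_rate = fp / (fp + tn)            # legitimate transactions wrongly flagged
    return precision, recall, f1, fp_rate

if __name__ == "__main__":
    # Hypothetical counts chosen only to illustrate the formulas.
    p, r, f1, fpr = classification_metrics(tp=964, fp=30, fn=36, tn=970)
    print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f} fp_rate={fpr:.3f}")
```

Note that recall and FP-rate trade off against each other, which is why the study reports them together when comparing Decision Tree and XGBoost.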
Multi-object tracking is one of the most important topics in computer science, with many applications such as surveillance systems, robot navigation, sports analysis, autonomous driving, and others. One of the main problems in multi-object tracking is occlusion, i.e., an object being covered by other objects. Occlusion may cause the IDs of objects to be switched. This study discusses occlusion in multi-object tracking and its resolution with network flow. Given object detections in each frame, the task of multi-object tracking is to estimate the movement of objects and then connect the estimated objects to the corresponding objects in the next frame, a task known as data association. Treating each object in a frame as a node, with edges connecting nodes in one frame to nodes in other frames, this architecture is known in graph theory as a network flow. The goal is then to find the set of edges that yields the greatest probability of transition from one frame to the next, an optimization problem known as max-cost network flow. Each edge carries the probability that a node moves to a given node in the following frame. This probability is computed from the positional distance and the feature similarity between frames; the feature used is a CNN feature. We model max-cost network flow as a maximum likelihood problem, which is then solved with the Hungarian algorithm. The data used in this research is 2DMOT2015. Evaluation results show that the system achieves an accuracy of 20.1% with 3084 ID switches and a fast computational speed of 215.8 frames/second.
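The frame-to-frame data association step can be sketched as a maximum-likelihood assignment problem. The sketch below is illustrative, not the paper's implementation: the transition probabilities are assumed given (in practice they would combine positional distance and CNN-feature similarity), and the optimum is found by brute force over permutations, which the Hungarian algorithm computes in O(n³) for larger instances.

```python
import math
from itertools import permutations

def associate(probs):
    """probs[i][j] is the transition probability from track i in frame t
    to detection j in frame t+1. Returns the assignment (track i ->
    detection assignment[i]) that maximizes the joint likelihood,
    i.e., the sum of log-probabilities over all tracks."""
    n = len(probs)
    best, best_score = None, -math.inf
    for perm in permutations(range(n)):
        score = sum(math.log(probs[i][perm[i]]) for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best

if __name__ == "__main__":
    # Made-up probabilities for 3 tracks vs. 3 detections.
    probs = [[0.7, 0.2, 0.1],
             [0.1, 0.6, 0.3],
             [0.2, 0.3, 0.5]]
    print(associate(probs))  # (0, 1, 2): the identity assignment is optimal here
```

Maximizing the sum of log-probabilities is equivalent to maximizing the product of edge probabilities, which is how the max-cost network flow objective becomes a maximum likelihood problem.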
Literary works need to be preserved, because preserving literary works also means preserving a language. Preservation can be carried out in many ways, one of which is through technology. One technological implementation is the automatic extraction of literary work entities. From the extracted data, a knowledge base can be built so that the information becomes more structured and easier to manage. This research used 435 Indonesian-language Wikipedia pages about Indonesian litterateurs as the data source. Two extraction processes were implemented: list extraction and table extraction. At the end of this research, 4953 literary work entities were obtained, mapped into 14 literary work categories. The quality of the extraction results was measured by precision and recall, obtained by comparing the extracted data with a golden result, i.e., data compiled manually from the Wikipedia pages about Indonesian litterateurs. The precision and recall of this research are 0.608 and 0.571, respectively.
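Evaluating extraction against a golden result reduces to comparing two sets of entities. A minimal sketch (the entity names below are invented placeholders, not from the study's dataset):

```python
def precision_recall(extracted, golden):
    """Precision = correct extractions / all extractions;
    recall = correct extractions / all golden entities."""
    extracted, golden = set(extracted), set(golden)
    correct = extracted & golden
    return len(correct) / len(extracted), len(correct) / len(golden)

if __name__ == "__main__":
    # Placeholder entity titles for illustration only.
    extracted = {"Salah Asuhan", "Siti Nurbaya", "Entitas Salah"}
    golden = {"Salah Asuhan", "Siti Nurbaya", "Layar Terkembang", "Belenggu"}
    p, r = precision_recall(extracted, golden)
    print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.667 recall=0.500
```

A precision of 0.608 thus means roughly 61% of the extracted entities matched the golden result, while a recall of 0.571 means roughly 57% of the golden entities were successfully extracted.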