Accepted Papers


Regular Papers (Main Track)

  • Ewa Andrejczuk, Christian Blum, Filippo Bistaffa, Juan Antonio Rodriguez-Aguilar and Carles Sierra. Heterogeneous Teams for Homogeneous Performance
  • Ryuta Arisaka and Ken Satoh. Abstract Argumentation / Persuasion / Dynamics
  • Matteo Baldoni, Cristina Baroglio, Olivier Boissier, Katherine Marie May, Roberto Micalizio and Stefano Tedeschi. Accountability and Responsibility in Agents' Organizations
  • Davide Dell'Anna, Mehdi Dastani and Fabiano Dalpiaz. Runtime Norm Revision using Bayesian Networks
  • Xiuyi Fan. On Generating Explainable Plans with Assumption-based Argumentation
  • Xiuyi Fan. A Temporal Planning Example with Assumption-based Argumentation
  • Ferdinando Fioretto, Hong Xu, Sven Koenig and T. K. Satish Kumar. Solving Multiagent Constraint Optimization Problems on the Constraint Composite Graph
  • Benoit Gaudou, Dominique Longin, Patrick Taillandier and Marion Valette. Modeling a real-case situation of egress using BDI agents with emotions and social skills
  • Amy Greenwald, Jasper C.H. Lee and Takehiro Oyakawa. Fast Algorithms for Computing Interim Allocations in Single-Parameter Environments
  • Mingyu Guo, Yong Yang and Muhammad Ali Babar. Cost Sharing Security Information with Minimal Release Delay
  • Nguyen Duy Hung. Progressive Inference Algorithms for Probabilistic Argumentation
  • Anisse Ismaili, Tomoaki Yamaguchi and Makoto Yokoo. Student-Project-Resource Allocation: Complexity of the Symmetric Case
  • Anisse Ismaili. On Existence, Mixtures, Computation and Efficiency in Multi-objective Games
  • Hiroshi Kiyotake, Masahiro Kohjima, Tatsushi Matsubayashi and Hiroyuki Toda. Multi Agent Flow Estimation Based on Bayesian Optimization with Time Delay and Low Dimensional Parameter Conversion Trick
  • Quratul-Ain Mahesar, Nir Oren and Wamberto Vasconcelos. Computing Preferences in Abstract Argumentation
  • Martin Masek, Chiou Peng Lam, Lyndon Benke, Luke Kelly and Michael Papasimeon. Discovering Emergent Agent Behaviour with Evolutionary Finite State Machines
  • Kouki Matsumura, Tenda Okimoto and Katsutoshi Hirayama. Bounded Approximate Algorithm for Probabilistic Coalition Structure Generation
  • Ahmed Moustafa and Takayuki Ito. A Deep Reinforcement Learning Approach for Large-Scale Service Composition
  • Hideyuki Nagai and Setsuya Kurahashi. Realizing Two Types of Compact City - Street Activeness and Tramway -
  • Tenda Okimoto, Nicolas Schwind, Emir Demirović, Katsumi Inoue and Pierre Marquis. Robust Coalition Structure Generation
  • Matteo Pascucci and Kees van Berkel. Notions of Instrumentality in Agency Logic
  • Shaowen Peng, Xianzhong Xie, Tsunenori Mine and Chang Su. Vector representation based model considering randomness of user mobility for predicting potential users
  • Fredrik Präntare and Fredrik Heintz. An Anytime Algorithm for Simultaneous Coalition Structure Generation and Assignment
  • Kota Shigedomi, Tadashi Sekiguchi, Atsushi Iwasaki and Makoto Yokoo. Repeated Triangular Trade: Sustaining Circular Cooperation with Observation Errors
  • Seyed Amin Tabatabaei, Mark Hoogendoorn and Aart van Halteren. Narrowing Reinforcement Learning: Overcoming the Cold Start Problem for Personalized Health Interventions

Regular Papers (Social Science Track)

  • Hung Khanh Nguyen, Raymond Chiong, Manuel Chica Serrano, Richard H. Middleton and Dung Kim Pham. Contract farming in the Mekong Delta's rice supply chain: Insights from an agent-based modeling study
  • Tanzhe Tang and Caspar Chorus. Learning Opinions by Observing Actions: Simulation of Opinion Dynamics Using an Action-Opinion Inference Model

Short Papers (Main Track)

  • Lyndon Benke, Michael Papasimeon and Kevin McDonald. Augmented Reality for Multi-Agent Simulation of Air Operations
  • Nicolas Bougie and Ryutaro Ichise. Abstracting Reinforcement Learning Agents with Prior Knowledge
  • Kevin Chapuis, Patrick Taillandier, Benoit Gaudou, Alexis Drogoul and Eric Daudé. A multi-modal urban traffic agent-based framework to study individual response to catastrophic events
  • Jérémie Dauphin and Ken Satoh. Dialogue games for enforcement of argument acceptance and rejection via attack removal
  • Elhadji Amadou Oury Diallo and Toshiharu Sugawara. Learning Strategic Group Formation for Coordinated Behavior in Adversarial Multi-Agent with Distributed Double DQN
  • Guido Governatori, Antonino Rotolo and Regis Riveret. A Deontic Argumentation Framework based on Deontic Defeasible Logic
  • Ali El Hassouni, Mark Hoogendoorn, Martijn van Otterlo and Eduardo Barbaro. Personalization of Health Interventions using Cluster-Based Reinforcement Learning
  • Ali El Hassouni, Mark Hoogendoorn and Vesa Muhonen. Using Generative Adversarial Networks to Develop a Realistic Human Behavior Simulator
  • Jingxian Huang, He Wang, Yao Zhang and Dengji Zhao. Simulations v.s. Human Playing in Repeated Prisoner’s Dilemma
  • Yuya Itoh and Shigeo Matsubara. Adaptive Budget Allocation for Sequential Tasks in Crowdsourcing
  • Erisa Karafili, Linna Wang, Antonis Kakas and Emil Lupu. Helping Forensics Analysts to Attribute Cyber-Attacks: An Argumentation-Based Reasoner
  • Shan Liu, Ahmed Moustafa and Takayuki Ito. Agent33: An Automated Negotiator with Heuristic Method for Searching Bids around Nash Bargaining Solution
  • Hang Luo and Shengzi Yang. How Possible it is Going to a war? An Agent-based Modeling and Simulation of Structural Balance and Alliance Strategy in International Network
  • Taiju Matsui, Satoshi Oyama and Masahito Kurihara. Effect of Viewing Directions on Deep Reinforcement Learning in 3D Virtual Environment Minecraft
  • Toshihiro Matsui and Hiroshi Matsuo. A Study of Relaxation Approaches for Asymmetric Constraint Optimization Problems
  • Felipe Meneguzzi, Ramon Fraga Pereira and Nir Oren. Sensor Placement for Plan Monitoring using Genetic Programming
  • Rym Mohamed, Zied Loukil and Zied Bouraoui. Qualitative-Based Reasoning in Possibilistic EL Ontology
  • Yasser Mohammad. FastVOI: Efficient Utility Elicitation During Negotiations
  • Lin Ni, Vicente Gonzalez, Jiamou Liu, Anass Rahouti, Libo Zhang and Bun Por Taing. An Agent-based Approach to Simulate Post-earthquake Indoor Crowd Evacuation
  • Shinpei Ogata, Yoshitaka Aoki, Hiroyuki Nakagawa and Kazuki Kobayashi. A Template System for Modeling and Verifying Agent Behaviors
  • Gideon Ogunniye, Alice Toniolo and Nir Oren. Meta-Argumentation Frameworks for Multi-party Dialogues
  • Francesco Olivieri, Guido Governatori, Matteo Cristani, Nick van Beest and Silvano Colombo Tosatto. Resource-driven Substructural Defeasible Logic
  • Mehdi Othmani-Guibourg, Amal El Fallah-Seghrouchni and Jean-Loup Farges. Decentralized multi-agent patrolling strategies using global idleness estimation
  • Poom Pianpak, Tran Cao Son and Zachary Oliver Toups. A Multi-Agent Simulator Environment Based on the Robot Operating System for Human-Robot Interaction Applications
  • Eriko Shimada, Shohei Yamane, Kotaro Ohori, Hiroaki Yamada and Shingo Takahashi. Agent-Based Simulation for Evaluation of Signage System Considering Expression Form in Airport Passenger Terminals and Other Large Facilities
  • Hitoshi Shimizu, Tatsushi Matsubayashi, Yusuke Tanaka, Tomoharu Iwata, Naonori Ueda and Hiroshi Sawada. Improving route traffic estimation by considering staying population
  • Daisuke Shiraishi, Kazuteru Miyazaki and Hiroaki Kobayashi. Proposal of Detour Path Suppression Method in PS Reinforcement Learning and its Application to Altruistic Multi-agent Environment
  • Shunki Takami, Kazunori Iwata, Nobuhiro Ito, Yohsuke Murase and Takeshi Uchitane. An environment for combinational experiments in a multi-agent simulation for disaster response
  • Hengjin Tang, Tatsushi Matsubayashi, Daisuke Sato and Hiroyuki Toda. Time-Series Predictions for People-Flow with Simulation Data
  • Fumito Uwano and Keiki Takadama. Strategy for Learning Cooperative Behavior with Local Information for Multi-agent Systems
  • Lise-Marie Veillon, Gauvain Bourgne and Henry Soldano. Better Collective Learning with Consistency Guarantees