diff --git a/HACKING.md b/HACKING.md new file mode 100644 index 0000000..5290628 --- /dev/null +++ b/HACKING.md @@ -0,0 +1,5 @@ +# HACKING + +The primary source for everything is my Zotero instance. It exports an RDF file, which is then converted to markdown using XSLT: + +`xsltproc to-markdown.xsl boardgame-research.rdf` \ No newline at end of file diff --git a/HEADER.md b/HEADER.md new file mode 100644 index 0000000..cb3fda0 --- /dev/null +++ b/HEADER.md @@ -0,0 +1,20 @@ +# boardgame-research [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) + +This is a list of boardgame research. The entries are primarily related to "solving/playing/learning" games (by various approaches), or +occasionally about designing or meta-aspects of a game. This doesn't cover all aspects of each game (notably missing social-science work), but +should be of interest to anyone interested in boardgames and their optimal play. While there is a ton of easily accessible research on games like +Chess and Go, finding prior work on more contemporary games can be a bit hard. This list focuses on the latter. If you are interested in well-researched +games like Chess, Go, Hex, take a look at the [Chess programming wiki](https://www.chessprogramming.org/Games) instead. The list also covers some computer-games that fall under similar themes. + +An importable RDF version is available as well: + +- [Zotero RDF](boardgame-research.rdf) + +See Import instructions here: https://www.zotero.org/support/kb/importing_standardized_formats + +[Watch the repository](https://docs.github.com/en/github/managing-subscriptions-and-notifications-on-github/setting-up-notifications/configuring-notifications#configuring-your-watch-settings-for-an-individual-repository) to get the latest updates for now (Choose "All Activity"). 
+ +If you aren't able to access any paper on this list, please [try using Sci-Hub](https://en.wikipedia.org/wiki/Sci-Hub) or [reach out to me](https://captnemo.in/contact/). + + + \ No newline at end of file diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..e4eaf86 --- /dev/null +++ b/Makefile @@ -0,0 +1,4 @@ +all: + xsltproc to-markdown.xsl boardgame-research.rdf > /tmp/contents.md + cat HEADER.md /tmp/contents.md > README.md + doctoc README.md \ No newline at end of file diff --git a/README.md b/README.md index f2c5dae..ced3cf9 100644 --- a/README.md +++ b/README.md @@ -6,32 +6,37 @@ should be of interest to anyone interested in boardgames and their optimal play. Chess and Go, finding prior work on more contemporary games can be a bit hard. This list focuses on the latter. If you are interested in well-researched games like Chess, Go, Hex, take a look at the [Chess programming wiki](https://www.chessprogramming.org/Games) instead. The list also covers some computer-games that fall under similar themes. -Exported versions are available in the following formats: +An importable RDF version is available as well: - [Zotero RDF](boardgame-research.rdf) -- [BibTeX](boardgame-research.bib) -Watch the repository to get the latest updates for now. +See Import instructions here: https://www.zotero.org/support/kb/importing_standardized_formats + +[Watch the repository](https://docs.github.com/en/github/managing-subscriptions-and-notifications-on-github/setting-up-notifications/configuring-notifications#configuring-your-watch-settings-for-an-individual-repository) to get the latest updates for now (Choose "All Activity"). If you aren't able to access any paper on this list, please [try using Sci-Hub](https://en.wikipedia.org/wiki/Sci-Hub) or [reach out to me](https://captnemo.in/contact/). 
-**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* +- [2048](#2048) +- [Accessibility](#accessibility) - [Azul](#azul) - [Blokus](#blokus) - [Carcassonne](#carcassonne) - [Diplomacy](#diplomacy) - [Dixit](#dixit) - [Dominion](#dominion) +- [Frameworks](#frameworks) +- [Game Design](#game-design) - [Hanabi](#hanabi) - [Hive](#hive) - [Jenga](#jenga) - [Kingdomino](#kingdomino) - [Lost Cities](#lost-cities) - [Mafia](#mafia) -- [Magic: the Gathering](#magic-the-gathering) +- [Magic: The Gathering](#magic-the-gathering) +- [Mobile Games](#mobile-games) - [Modern Art: The card game](#modern-art-the-card-game) - [Monopoly](#monopoly) - [Monopoly Deal](#monopoly-deal) @@ -41,8 +46,8 @@ If you aren't able to access any paper on this list, please [try using Sci-Hub]( - [Pentago](#pentago) - [Quixo](#quixo) - [Race for the Galaxy](#race-for-the-galaxy) -- [The Resistance: Avalon](#the-resistance-avalon) -- [Risk](#risk) +- [Resistance: Avalon](#resistance-avalon) +- [RISK](#risk) - [Santorini](#santorini) - [Scotland Yard](#scotland-yard) - [Secret Hitler](#secret-hitler) @@ -53,250 +58,259 @@ If you aren't able to access any paper on this list, please [try using Sci-Hub]( - [Tetris Link](#tetris-link) - [Ticket to Ride](#ticket-to-ride) - [Ultimate Tic-Tac-Toe](#ultimate-tic-tac-toe) -- [Uno](#uno) +- [UNO](#uno) - [Yahtzee](#yahtzee) -- [Mobile Games](#mobile-games) - - [2048](#2048) -- [Game Design](#game-design) - - [Accessibility](#accessibility) -- [Frameworks/Toolkits](#frameworkstoolkits) +# 2048 +- [Systematic Selection of N-Tuple Networks for 2048](https://doi.org/10.1007%2F978-3-319-50935-8_8) (bookSection) +- [Systematic selection of N-tuple networks with consideration of interinfluence for game 2048](https://doi.org/10.1109%2Ftaai.2016.7880154) (conferencePaper) +- [An investigation into 2048 AI strategies](https://doi.org/10.1109%2Fcig.2014.6932920) (conferencePaper) +- [Threes!, Fives, 1024!, and 2048 are 
Hard](http://arxiv.org/abs/1505.04274v1) (journalArticle) +- [Making Change in 2048](http://arxiv.org/abs/1804.07396v1) (journalArticle) +- [Analysis of the Game "2048" and its Generalization in Higher Dimensions](http://arxiv.org/abs/1804.07393v2) (journalArticle) +- [Multi-Stage Temporal Difference Learning for 2048-like Games](http://arxiv.org/abs/1606.07374v2) (journalArticle) +- [2048 is (PSPACE) Hard, but Sometimes Easy](http://arxiv.org/abs/1408.6315v1) (journalArticle) +- [Temporal difference learning of N-tuple networks for the game 2048](http://ieeexplore.ieee.org/document/6932907/) (conferencePaper) +- [On the Complexity of Slide-and-Merge Games](http://arxiv.org/abs/1501.03837) (journalArticle) +- [2048 Without New Tiles Is Still Hard](http://drops.dagstuhl.de/opus/volltexte/2016/5885/) (journalArticle) + +# Accessibility +- [Meeple Centred Design: A Heuristic Toolkit for Evaluating the Accessibility of Tabletop Games](http://link.springer.com/10.1007/s40869-018-0057-8) (journalArticle) +- [Eighteen Months of Meeple Like Us: An Exploration into the State of Board Game Accessibility](http://link.springer.com/10.1007/s40869-018-0056-9) (journalArticle) + # Azul -- [A summary of a dissertation on Azul](https://old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/) (unpublished) -- [Ceramic: A research environment based on the multi-player strategic board game Azul](https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1) [[GitHub](https://github.com/Swynfel/ceramic)] +- [A summary of a dissertation on Azul](https://old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/) (report) +- [Ceramic: A research environment based on the multi-player strategic board game Azul](https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1) (conferencePaper) +- [Ceramic: A research 
environment based on the multi-player strategic board game Azul](https://github.com/Swynfel/ceramic) (computerProgram) # Blokus -- [Blokus Game Solver](https://digitalcommons.calpoly.edu/cpesp/290/) -- [FPGA Blokus Duo Solver using a massively parallel architecture](https://doi.org/10.1109/FPT.2013.6718426) -- [Blokus Duo game on FPGA](https://doi.org/10.1109/CADS.2013.6714256) +- [Blokus Game Solver](https://digitalcommons.calpoly.edu/cpesp/290/) (report) +- [FPGA Blokus Duo Solver using a massively parallel architecture](http://ieeexplore.ieee.org/document/6718426/) (conferencePaper) +- [Blokus Duo game on FPGA](http://ieeexplore.ieee.org/document/6714256/) (conferencePaper) # Carcassonne -- [Playing Carcassonne with Monte Carlo Tree Search](https://arxiv.org/abs/2009.12974) +- [Playing Carcassonne with Monte Carlo Tree Search](http://arxiv.org/abs/2009.12974) (journalArticle) # Diplomacy -- [Human-Level Performance in No-Press Diplomacy via Equilibrium Search](https://arxiv.org/abs/2010.02923) -- [Learning to Play No-Press Diplomacy with Best Response Policy Iteration ](https://arxiv.org/abs/2006.04635) -- [No Press Diplomacy: Modeling Multi-Agent Gameplay ](https://arxiv.org/abs/1909.02128) -- [Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game ](https://arxiv.org/abs/1902.06996) +- [Learning to Play No-Press Diplomacy with Best Response Policy Iteration](http://arxiv.org/abs/2006.04635v2) (journalArticle) +- [No Press Diplomacy: Modeling Multi-Agent Gameplay](http://arxiv.org/abs/1909.02128v2) (journalArticle) +- [Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game](http://arxiv.org/abs/1902.06996v1) (journalArticle) +- [Monte Carlo Tree Search for the Game of Diplomacy](https://dl.acm.org/doi/10.1145/3411408.3411413) (conferencePaper) +- [Human-Level Performance in No-Press Diplomacy via Equilibrium Search](http://arxiv.org/abs/2010.02923) (journalArticle) # Dixit -- [Creative Captioning: An AI 
Grand Challenge Based on the Dixit Board Game](https://arxiv.org/abs/2010.00048) -- [Dixit: Interactive Visual Storytelling via Term Manipulation](https://arxiv.org/abs/1903.02230) +- [Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game](http://arxiv.org/abs/2010.00048) (journalArticle) +- [Dixit: Interactive Visual Storytelling via Term Manipulation](http://arxiv.org/abs/1903.02230) (journalArticle) # Dominion +- [Dominion Simulator](https://dominionsimulator.wordpress.com/f-a-q/) (computerProgram) +- [Dominion Simulator Source Code](https://github.com/mikemccllstr/dominionstats/) (computerProgram) +- [Best and worst openings in Dominion](http://councilroom.com/openings) (blogPost) +- [Optimal Card Ratios in Dominion](http://councilroom.com/optimal_card_ratios) (blogPost) +- [Card Winning Stats on Dominion Server](http://councilroom.com/supply_win) (blogPost) +- [Dominion Strategy Forum](http://forum.dominionstrategy.com/index.php) (forumPost) +- [Clustering Player Strategies from Variable-Length Game Logs in Dominion](http://arxiv.org/abs/1811.11273) (journalArticle) -There is a [simulator](https://dominionsimulator.wordpress.com/f-a-q/) and the code behind -[the Dominion server running councilroom.com](https://github.com/mikemccllstr/dominionstats/) is available. councilroom has the [best and worst openings](http://councilroom.com/openings), [optimal card ratios](http://councilroom.com/optimal_card_ratios), [Card winning stats](http://councilroom.com/supply_win) and lots of other empirical research. The [Dominion Strategy Forum](http://forum.dominionstrategy.com/index.php) is another good general resource. - -- [Clustering Player Strategies from Variable-Length Game Logs in Dominion](https://arxiv.org/abs/1811.11273) - -# Hanabi -- [Improving Policies via Search in Cooperative Partially Observable Games](https://arxiv.org/abs/1912.02318) (FB) [[code](https://github.com/facebookresearch/Hanabi_SPARTA)] - Current best result. 
-- [Re-determinizing MCTS in Hanabi](https://ieee-cog.org/2020/papers2019/paper_17.pdf) -- [Hanabi is NP-hard, Even for Cheaters who Look at Their Cards](https://arxiv.org/abs/1603.01911) -- [Evolving Agents for the Hanabi 2018 CIG Competition](https://ieeexplore.ieee.org/abstract/document/8490449) -- [Aspects of the Cooperative Card Game Hanabi](https://link.springer.com/chapter/10.1007/978-3-319-67468-1_7) -- [How to Make the Perfect Fireworks Display: Two Strategies for Hanabi](https://doi.org/10.4169/math.mag.88.5.323) -- [Playing Hanabi Near-Optimally](https://link.springer.com/chapter/10.1007/978-3-319-71649-7_5) -- [Evaluating and modelling Hanabi-playing agents](https://doi.org/10.1109/CEC.2017.7969465) -- [An intentional AI for hanabi](https://ieeexplore.ieee.org/abstract/document/8080417) -- [The Hanabi challenge: A new frontier for AI research](https://doi.org/10.1016/j.artint.2019.103216) [[arXiv](https://arxiv.org/abs/1902.00506)]] (DeepMind) -- [Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information](https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167/10193) -- [A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs](http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf) -- [I see what you see: Integrating eye tracking into Hanabi playing agents](http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf) -- [The 2018 Hanabi competition](https://doi.org/10.1109/CIG.2019.8848008) -- [Diverse Agents for Ad-Hoc Cooperation in Hanabi](https://doi.org/10.1109/CIG.2019.8847944) [[arXiv](https://arxiv.org/pdf/2004.13710v2.pdf)] -- [State of the art Hanabi bots + simulation framework in rust](https://github.com/WuTheFWasThat/hanabi.rs) -- [A strategy simulator for the well-known cooperative card game Hanabi](https://github.com/rjtobin/HanSim) -- [A framework for writing bots that play Hanabi](https://github.com/Quuxplusone/Hanabi) -- [Evaluating the Rainbow DQN Agent in 
Hanabi with Unseen Partners](https://arxiv.org/abs/2004.13291) -- [Operationalizing Intentionality to Play Hanabi with Human Players](https://doi.org/10.1109/TG.2020.3009359) -- [Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents](https://ojs.aaai.org//index.php/AIIDE/article/view/7404) [[pdf](https://ojs.aaai.org/index.php/AIIDE/article/view/7404/7333)] -- [Playing mini-Hanabi card game with Q-learning](http://id.nii.ac.jp/1001/00205046/) -- [Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi](https://arxiv.org/abs/2004.13710) -- [Hanabi Open Agend Dataset](https://github.com/aronsar/hoad) - [[ACM](https://dl.acm.org/doi/abs/10.5555/3461017.3461244)] - -# Hive -- [On the complexity of Hive](https://dspace.library.uu.nl/handle/1874/396955) - -# Jenga -- [Maximum genus of the Jenga like configurations](https://arxiv.org/abs/1708.01503) -- [Jidoukan Jenga: Teaching English through remixing games and game rules](https://www.llpjournal.org/2020/04/13/jidokan-jenga.html) - -# Kingdomino -- [Monte Carlo Methods for the Game Kingdomino](https://doi.org/10.1109/CIG.2018.8490419) [[arXiv](https://arxiv.org/abs/1807.04458)] -- [NP-completeness of the game Kingdomino](https://arxiv.org/abs/1909.02849) - -# Lost Cities -- [Applying Neural Networks and Genetic Programming to the Game Lost Cities](http://digital.library.wisc.edu/1793/79080) - -# Mafia -- [A mathematical model of the Mafia game](https://arxiv.org/abs/1009.1031) -- [Automatic Long-Term Deception Detection in Group Interaction Videos](https://arxiv.org/abs/1905.08617) -- [Human-Side Strategies in the Werewolf Game Against the Stealth Werewolf Strategy](https://link.springer.com/chapter/10.1007/978-3-319-50935-8_9) -- [A Theoretical Study of Mafia Games](https://arxiv.org/abs/0804.0071) - -# Magic: the Gathering -- [Magic: the Gathering is as Hard as Arithmetic](https://arxiv.org/abs/2003.05119) -- [Magic: The Gathering is Turing Complete](https://arxiv.org/abs/1904.09828) 
-- [Neural Networks Models for Analyzing Magic: the Gathering Cards](https://link.springer.com/chapter/10.1007/978-3-030-04179-3_20) [[arXiv](https://arxiv.org/abs/1810.03744)] -- [The Complexity of Deciding Legality of a Single Step of Magic: the Gathering](https://livrepository.liverpool.ac.uk/3029568/1/magic.pdf) -- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering](https://doi.org/10.1109/TCIAIG.2012.2204883) -- [Magic: The Gathering in Common Lisp](https://pdfs.semanticscholar.org/5fc8/58802f19504ea950e20e31526dc2269b43d8.pdf) [[source](https://github.com/jeffythedragonslayer/maglisp)] -- [Deck Costruction Strategies for Magic: the Gathering](https://cab.unime.it/journals/index.php/congress/article/viewFile/141/141) -- [Deckbuilding in Magic: The Gathering Using a Genetic Algorithm](http://hdl.handle.net/11250/2462429) -- [Mathematical programming and Magic: The Gathering®](https://commons.lib.niu.edu/handle/10843/19194) -- [Optimal Card-Collecting Strategies for Magic: The Gathering](https://doi.org/10.1080/07468342.2000.11974103) -- [Monte Carlo search applied to card selection in Magic: The Gathering](https://doi.org/10.1109/CIG.2009.5286501) -- [Magic: The Gathering Deck Performance Prediction](http://cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf) - -# Modern Art: The card game -- [A constraint programming based solver for Modern Art](https://github.com/captn3m0/modernart) - -# Monopoly -- [Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach](https://arxiv.org/abs/2103.00683) -- [Negotiation strategy of agents in the MONOPOLY game](https://ieeexplore.ieee.org/abstract/document/1013210) -- [Generating interesting Monopoly boards from open data](https://ieeexplore.ieee.org/abstract/document/6374168) -- [Estimating the probability that the game of Monopoly never 
ends](https://ieeexplore.ieee.org/abstract/document/5429349) -- [Learning to play Monopoly:A Reinforcement Learning approach](https://www.researchgate.net/profile/Anestis_Fachantidis/publication/289403522_Learning_to_play_monopoly_A_Reinforcement_learning_approach/links/59dd1f3e458515f6efef1904/Learning-to-play-monopoly-A-Reinforcement-learning-approach.pdf) -- [Monopoly as a Markov Process](https://doi.org/10.1080/0025570X.1972.11976187) -- [Learning to Play Monopoly withMonte Carlo Tree Search](https://project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf) -- [Monopoly Using Reinforcement Learning ](https://ieeexplore.ieee.org/abstract/document/8929523) -- [A Markovian Exploration of Monopoly](https://pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf) -- [What's the best Monopoly strategy](https://publications.lakeforest.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1277&context=gss) - -# Monopoly Deal -- [Implementation of AI Player on "Monopoly Deal"](https://doi.org/10.1007/978-3-662-46742-8_11) - -# Nmbr9 -- [Nmbr9 as a Constraint Programming Challenge](https://zayenz.se/blog/post/nmbr9-cp2019-abstract/) - -# Pandemic -- [NP-Completeness of Pandemic](https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article) - -# Patchwork -- [State Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/blog/post/patchwork-modref2019-paper/) - -# Pentago -- [On Solving Pentago](http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf) - -# Quixo -- [Quixo Is Solved](https://arxiv.org/abs/2007.15895) -- [QUIXO is EXPTIME-complete](https://doi.org/10.1016/j.ipl.2020.105995) - -# Race for the Galaxy -- [SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy](https://doi.org/10.1007/978-3-319-61030-6_27) - -# The Resistance: Avalon -- [Finding Friend and Foe in Multi-Agent Games](https://arxiv.org/abs/1906.02330) - -# Risk - -- [Mini-Risk: Strategies for a 
Simplified Board Game](https://doi.org/10.1057/jors.1990.2) -- [A Multi-Agent System for playing the board game Risk](https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A831093&dswid=-4740) -- [Learning the risk board game with classifier systems](https://doi.org/10.1145/508791.508904) -- [Markov Chains and the RISK Board Game](https://doi.org/10.1080/0025570X.1997.11996573) -- [Markov Chains for the RISK Board Game Revisited](https://doi.org/10.1080/0025570X.2003.11953165) -- [RISK Board Game ‐ Battle Outcome Analysis](http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf) -- [Planning an endgame move set for the game RISK](https://doi.org/10.1109/TEVC.2005.856211) -- [RISKy Business: An In-Depth Look at the Game RISK](https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3/) -- [An Intelligent Artificial Player for the Game of Risk](http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf) -- [Monte Carlo Tree Search for Risk](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf) [[Presentation](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf)] - -# Santorini -- [A Mathematical Analysis of the Game of Santorini](https://openworks.wooster.edu/independentstudy/8917/) - -# Scotland Yard -- [The complexity of Scotland Yard](http://www.illc.uva.nl/Research/Publications/Reports/PP-2006-18.text.pdf) - -# Secret Hitler -- [Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search](https://arxiv.org/abs/2005.07156) - -# Set -Set has a long history of mathematical research, so this list isn't exhaustive. 
- -- [Game, Set, Math](https://doi.org/10.4169/math.mag.85.2.083) -- [The Joy of SET](https://doi.org/10.1080/00029890.2018.1412661) - -# Settlers of Catan -- [The effectiveness of persuasion in The Settlers of Catan ](https://doi.org/10.1109/CIG.2014.6932861) -- [Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan ](https://doi.org/10.4018/IJGCMS.2018040103) -- [Game strategies for The Settlers of Catan](https://doi.org/10.1109/CIG.2014.6932884) -- [Monte-Carlo Tree Search in Settlers of Catan](https://doi.org/10.1007/978-3-642-12993-3_3) -- [Settlers of Catan bot trained using reinforcement learning (MATLAB).](https://jonzia.github.io/Catan/) -- [Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue](https://escholarship.org/content/qt9zt506xx/qt9zt506xx.pdf) -- [POMCP with Human Preferencesin Settlers of Catan](https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217) -- [Reinforcement Learning of Strategies for Settlers of Catan](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.561.6293&rep=rep1&type=pdf) -- [Deep Reinforcement Learning in Strategic Board GameEnvironments](https://doi.org/10.1007/978-3-030-14174-5_16) [[pdf](https://hal.archives-ouvertes.fr/hal-02124411/document)] -- [Monte Carlo Tree Search in a Modern Board Game Framework](https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf) -- [The impact of loaded dice in Catan](https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html) -- [Playing Catan with Cross-dimensional Neural Network](https://arxiv.org/abs/2008.07079) -- [Strategic Dialogue Management via Deep Reinforcement Learning](https://arxiv.org/abs/1511.08099) - -# Shobu -- [Shobu AI Playground](https://github.com/JayWalker512/Shobu) -- [Shobu randomly played games dataset](https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k) - -# Terra Mystica -- [Using Tabu 
Search Algorithm for Map Generation in the Terra Mystica Tabletop Game](https://arxiv.org/abs/2006.02716) - -# [Tetris Link](https://boardgamegeek.com/boardgame/93185/tetris-link) -- [A New Challenge: Approaching Tetris Link with AI](https://arxiv.org/abs/2004.00377) - -# Ticket to Ride -- [Evolving maps and decks for ticket to ride](https://doi.org/10.1145/3235765.3235813) -- [Materials for Ticket to Ride Seattle and a framework for making more game boards](https://github.com/dovinmu/ttr_generator) -- [The Difficulty of Learning Ticket to Ride](https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf) -- [AI-based Playtesting of Contemporary Board Games](https://doi.org/10.1145/3102071.3102105) [[pdf](http://game.engineering.nyu.edu/wp-content/uploads/2017/06/ticket-ride-fdg2017-camera-ready.pdf)] [[presentation](https://www.rtealwitter.com/slides/2020-JMM.pdf)] - -# Ultimate Tic-Tac-Toe -- [At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe](https://arxiv.org/abs/2006.02353) - -# Uno -- [The complexity of UNO](https://arxiv.org/abs/1003.2851) -- [UNO Is Hard, Even for a Single Player](https://doi.org/10.1007/978-3-642-13122-6_15) - -# Yahtzee -- [Optimal Solitaire Yahtzee Strategies](http://www.yahtzee.org.uk/optimal_yahtzee_TV.pdf) -- [Nearly Optimal Computer Play in Multi-player Yahtzee](https://doi.org/10.1007/978-3-642-17928-0_23) -- [Computer Strategies for Solitaire Yahtzee](https://doi.org/10.1109/CIG.2007.368089) -- [An optimal strategy for Yahtzee](http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf) -- [Yahtzee: a Large Stochastic Environment for RL Benchmarks](https://pdfs.semanticscholar.org/f5c2/e9c9b17f584f060a73036109f697ac819a23.pdf) -- [Modeling expert problem solving in a game of chance: a Yahtzee case study](https://doi.org/10.1111/1468-0394.00160) -- [Probabilites In Yahtzee](https://doi.org/10.5951/MT.75.9.0751) -- [Optimal Yahtzee performance in multi-player 
games](https://www.diva-portal.org/smash/get/diva2:668705/FULLTEXT01.pdf) -- [Defensive Yahtzee](https://www.diva-portal.org/smash/get/diva2:817838/FULLTEXT01.pdf) -- [Using Deep Q-Learning to Compare Strategy Ladders of Yahtzee](https://pdfs.semanticscholar.org/6bec/1c34c8ace65adc95d39cb0c0e589ae392678.pdf) -- [How to Maximize Your Score in Solitaire Yahtzee](http://www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf) - -# Mobile Games -- [Trainyard is NP-Hard](https://arxiv.org/abs/1603.00928) -- [Threes!, Fives, 1024!, and 2048 are Hard](https://arxiv.org/abs/1505.04274) - -## 2048 -- [Making Change in 2048](https://arxiv.org/abs/1804.07396) -- [Analysis of the Game "2048" and its Generalization in Higher Dimensions](https://arxiv.org/abs/1804.07393) -- [Temporal difference learning of N-tuple networks for the game 2048](https://ieeexplore.ieee.org/abstract/document/6932907) -- [Multi-Stage Temporal Difference Learning for 2048-like Games](https://arxiv.org/abs/1606.07374) -- [On the Complexity of Slide-and-Merge Games](https://arxiv.org/abs/1501.03837) -- [2048 is (PSPACE) Hard, but Sometimes Eas](https://arxiv.org/abs/1408.6315) -- [Systematic Selection of N-Tuple Networks for 2048](https://doi.org/10.1007/978-3-319-50935-8_8) -- [Systematic selection of N-tuple networks with consideration of interinfluence for game 2048](https://doi.org/10.1109/TAAI.2016.7880154) -- [2048 Without New Tiles Is Still Hard](https://drops.dagstuhl.de/opus/volltexte/2016/5885/) -- [An investigation into 2048 AI strategies](https://doi.org/10.1109/CIG.2014.6932920) +# Frameworks +- [RLCard: A Toolkit for Reinforcement Learning in Card Games](http://arxiv.org/abs/1910.04376) (journalArticle) +- [Design and Implementation of TAG: A Tabletop Games Framework](http://arxiv.org/abs/2009.12065) (journalArticle) +- [Game Tree Search Algorithms - C++ library for AI bot programming.](https://github.com/AdamStelmaszczyk/gtsa) (computerProgram) +- [TAG: Tabletop Games 
Framework](https://github.com/GAIGResearch/TabletopGames) (computerProgram) # Game Design -- [MDA: A Formal Approach to Game Design and Game Research ](https://www.aaai.org/Papers/Workshops/2004/WS-04-04/WS04-04-001.pdf) -- [Exploring Anonymity in Cooperative Board Games](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.5554&rep=rep1&type=pdf) +- [MDA: A Formal Approach to Game Design and Game Research](https://aaai.org/Library/Workshops/2004/ws04-04-001.php) (conferencePaper) +- [Exploring anonymity in cooperative board games](http://www.digra.org/digital-library/publications/exploring-anonymity-in-cooperative-board-games/) (conferencePaper) -## Accessibility -- [Eighteen Months of Meeple Like Us: An Exploration into the State of Board Game Accessibility](https://doi.org/10.1007/s40869-018-0056-9) -- [Meeple Centred Design: A Heuristic Toolkit for Evaluating the Accessibility of Tabletop Games](https://doi.org/10.1007/s40869-018-0057-8) +# Hanabi +- [How to Make the Perfect Fireworks Display: Two Strategies for Hanabi](https://doi.org/10.4169%2Fmath.mag.88.5.323) (journalArticle) +- [Evaluating and modelling Hanabi-playing agents](https://doi.org/10.1109%2Fcec.2017.7969465) (conferencePaper) +- [The Hanabi challenge: A new frontier for AI research](https://doi.org/10.1016%2Fj.artint.2019.103216) (journalArticle) +- [The 2018 Hanabi competition](https://doi.org/10.1109%2Fcig.2019.8848008) (conferencePaper) +- [Diverse Agents for Ad-Hoc Cooperation in Hanabi](https://doi.org/10.1109%2Fcig.2019.8847944) (conferencePaper) +- [Improving Policies via Search in Cooperative Partially Observable Games](http://arxiv.org/abs/1912.02318v1) (journalArticle) +- [Hanabi is NP-hard, Even for Cheaters who Look at Their Cards](http://arxiv.org/abs/1603.01911v3) (journalArticle) +- [Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi](http://arxiv.org/abs/2004.13710v2) (journalArticle) +- [Evaluating the Rainbow DQN Agent in Hanabi with Unseen 
Partners](http://arxiv.org/abs/2004.13291v1) (journalArticle) +- [Re-determinizing MCTS in Hanabi](https://ieee-cog.org/2020/papers2019/paper_17.pdf) (conferencePaper) +- [Evolving Agents for the Hanabi 2018 CIG Competition](https://ieeexplore.ieee.org/document/8490449/) (conferencePaper) +- [Aspects of the Cooperative Card Game Hanabi](http://link.springer.com/10.1007/978-3-319-67468-1_7) (bookSection) +- [Playing Hanabi Near-Optimally](http://link.springer.com/10.1007/978-3-319-71649-7_5) (bookSection) +- [An intentional AI for hanabi](http://ieeexplore.ieee.org/document/8080417/) (conferencePaper) +- [Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information](https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167) (conferencePaper) +- [A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs](http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf) (journalArticle) +- [I see what you see: Integrating eye tracking into Hanabi playing agents](http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf) (journalArticle) +- [State of the art Hanabi bots + simulation framework in rust](https://github.com/WuTheFWasThat/hanabi.rs) (computerProgram) +- [A strategy simulator for the well-known cooperative card game Hanabi](https://github.com/rjtobin/HanSim) (computerProgram) +- [A framework for writing bots that play Hanabi](https://github.com/Quuxplusone/Hanabi) (computerProgram) +- [Operationalizing Intentionality to Play Hanabi with Human Players](https://ieeexplore.ieee.org/document/9140404/) (journalArticle) +- [Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents](https://ojs.aaai.org/index.php/AIIDE/article/view/7404) (journalArticle) +- [Playing mini-Hanabi card game with Q-learning](http://id.nii.ac.jp/1001/00205046/) (conferencePaper) +- [Hanabi Open Agent Dataset](https://github.com/aronsar/hoad) (computerProgram) +- [Hanabi Open Agent Dataset](https://dl.acm.org/doi/10.5555/3463952.3464188) (conferencePaper) +- [Evaluation of Human-AI Teams for Learned and Rule-Based Agents in 
Hanabi](http://arxiv.org/abs/2107.07630) (journalArticle) -# Frameworks/Toolkits -- [RLCard: A Toolkit for Reinforcement Learning in Card Games](https://arxiv.org/abs/1910.04376) -- [GTSA: Game Tree Search Algorithms](https://github.com/AdamStelmaszczyk/gtsa) -- [Design and Implementation of TAG: A Tabletop Games Framework](https://arxiv.org/abs/2009.12065) [[GitHub](https://github.com/GAIGResearch/TabletopGames)] -- [TAG: Tabletop Games Framework](https://github.com/GAIGResearch/TabletopGames) \ No newline at end of file +# Hive +- [On the complexity of Hive](https://dspace.library.uu.nl/handle/1874/396955) (thesis) + +# Jenga +- [Jidoukan Jenga: Teaching English through remixing games and game rules](https://www.llpjournal.org/2020/04/13/jidokan-jenga.html) (journalArticle) +- [Maximum genus of the Jenga like configurations](http://arxiv.org/abs/1708.01503) (journalArticle) + +# Kingdomino +- [Monte Carlo Methods for the Game Kingdomino](https://doi.org/10.1109%2Fcig.2018.8490419) (conferencePaper) +- [Monte Carlo Methods for the Game Kingdomino](http://arxiv.org/abs/1807.04458v2) (journalArticle) +- [NP-completeness of the game Kingdomino](http://arxiv.org/abs/1909.02849v3) (journalArticle) + +# Lost Cities +- [Applying Neural Networks and Genetic Programming to the Game Lost Cities](https://minds.wisconsin.edu/bitstream/handle/1793/79080/LydeenSpr18.pdf?sequence=1&isAllowed=y) (conferencePaper) + +# Mafia +- [A mathematical model of the Mafia game](http://arxiv.org/abs/1009.1031v3) (journalArticle) +- [Automatic Long-Term Deception Detection in Group Interaction Videos](http://arxiv.org/abs/1905.08617) (journalArticle) +- [Human-Side Strategies in the Werewolf Game Against the Stealth Werewolf Strategy](http://link.springer.com/10.1007/978-3-319-50935-8_9) (bookSection) +- [A Theoretical Study of Mafia Games](http://arxiv.org/abs/0804.0071) (journalArticle) + +# Magic: The Gathering +- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect 
Information Card Game Magic: The Gathering](https://doi.org/10.1109%2Ftciaig.2012.2204883) (journalArticle) +- [Optimal Card-Collecting Strategies for Magic: The Gathering](https://doi.org/10.1080%2F07468342.2000.11974103) (journalArticle) +- [Monte Carlo search applied to card selection in Magic: The Gathering](https://doi.org/10.1109%2Fcig.2009.5286501) (conferencePaper) +- [Magic: the Gathering is as Hard as Arithmetic](http://arxiv.org/abs/2003.05119v1) (journalArticle) +- [Magic: The Gathering is Turing Complete](http://arxiv.org/abs/1904.09828v2) (journalArticle) +- [Neural Networks Models for Analyzing Magic: the Gathering Cards](http://arxiv.org/abs/1810.03744v1) (journalArticle) +- [Neural Networks Models for Analyzing Magic: The Gathering Cards](http://link.springer.com/10.1007/978-3-030-04179-3_20) (bookSection) +- [The Complexity of Deciding Legality of a Single Step of Magic: The Gathering](https://livrepository.liverpool.ac.uk/3029568/) (conferencePaper) +- [Magic: The Gathering in Common Lisp](https://vixra.org/abs/2001.0065) (conferencePaper) +- [Magic: The Gathering in Common Lisp](https://github.com/jeffythedragonslayer/maglisp) (computerProgram) +- [Mathematical programming and Magic: The Gathering](https://commons.lib.niu.edu/handle/10843/19194) (thesis) +- [Deck Construction Strategies for Magic: The Gathering](https://www.doi.org/10.1685/CSC06077) (conferencePaper) +- [Deckbuilding in Magic: The Gathering Using a Genetic Algorithm](https://doi.org/11250/2462429) (thesis) +- [Magic: The Gathering Deck Performance Prediction](http://cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf) (report) + +# Mobile Games +- [Trainyard is NP-Hard](http://arxiv.org/abs/1603.00928v1) (journalArticle) +- [Threes!, Fives, 1024!, and 2048 are Hard](http://arxiv.org/abs/1505.04274v1) (journalArticle) + +# Modern Art: The card game +- [A constraint programming based solver for Modern 
Art](https://github.com/captn3m0/modernart) (computerProgram) + +# Monopoly +- [Monopoly as a Markov Process](https://doi.org/10.1080%2F0025570x.1972.11976187) (journalArticle) +- [Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach](http://arxiv.org/abs/2103.00683) (journalArticle) +- [Negotiation strategy of agents in the MONOPOLY game](http://ieeexplore.ieee.org/document/1013210/) (conferencePaper) +- [Generating interesting Monopoly boards from open data](http://ieeexplore.ieee.org/document/6374168/) (conferencePaper) +- [Estimating the probability that the game of Monopoly never ends](http://ieeexplore.ieee.org/document/5429349/) (conferencePaper) +- [Learning to Play Monopoly with Monte Carlo Tree Search](https://project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf) (report) +- [Monopoly Using Reinforcement Learning](https://ieeexplore.ieee.org/document/8929523/) (conferencePaper) +- [A Markovian Exploration of Monopoly](https://pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf) (report) +- [Learning to play Monopoly: A Reinforcement Learning approach](https://intelligence.csd.auth.gr/publication/conference-papers/learning-to-play-monopoly-a-reinforcement-learning-approach/) (conferencePaper) +- [What’s the Best Monopoly Strategy?](https://core.ac.uk/download/pdf/48614184.pdf) (presentation) + +# Monopoly Deal +- [Implementation of Artificial Intelligence with 3 Different Characters of AI Player on “Monopoly Deal” Computer Game](https://doi.org/10.1007%2F978-3-662-46742-8_11) (bookSection) + +# Nmbr9 +- [Nmbr9 as a Constraint Programming Challenge](http://arxiv.org/abs/2001.04238) (journalArticle) +- [Nmbr9 as a Constraint Programming Challenge](https://zayenz.se/blog/post/nmbr9-cp2019-abstract/) (blogPost) + +# Pandemic +- [NP-Completeness of Pandemic](https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article) (journalArticle) + +# Patchwork +- [State 
Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/blog/post/patchwork-modref2019-paper/) (blogPost) +- [State Representation and Polyomino Placement for the Game Patchwork](http://arxiv.org/abs/2001.04233) (journalArticle) +- [State Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf) (presentation) + +# Pentago +- [On Solving Pentago](http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf) (thesis) + +# Quixo +- [QUIXO is EXPTIME-complete](https://doi.org/10.1016%2Fj.ipl.2020.105995) (journalArticle) +- [Quixo Is Solved](http://arxiv.org/abs/2007.15895) (journalArticle) + +# Race for the Galaxy +- [SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy](https://doi.org/10.1007%2F978-3-319-61030-6_27) (bookSection) + +# Resistance: Avalon +- [Finding Friend and Foe in Multi-Agent Games](http://arxiv.org/abs/1906.02330) (journalArticle) + +# RISK +- [Mini-Risk: Strategies for a Simplified Board Game](https://doi.org/10.1057%2Fjors.1990.2) (journalArticle) +- [Learning the risk board game with classifier systems](https://doi.org/10.1145%2F508791.508904) (conferencePaper) +- [Markov Chains and the RISK Board Game](https://doi.org/10.1080%2F0025570x.1997.11996573) (journalArticle) +- [Markov Chains for the RISK Board Game Revisited](https://doi.org/10.1080%2F0025570x.2003.11953165) (journalArticle) +- [Planning an Endgame Move Set for the Game RISK: A Comparison of Search Algorithms](https://doi.org/10.1109%2Ftevc.2005.856211) (journalArticle) +- [An Intelligent Artificial Player for the Game of Risk](http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf) (presentation) +- [RISKy Business: An In-Depth Look at the Game RISK](https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3) (journalArticle) +- [RISK Board Game ‐ Battle Outcome 
Analysis](http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf) (journalArticle) +- [A multi-agent system for playing the board game risk](http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3781) (thesis) +- [Monte Carlo Tree Search for Risk](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf) (conferencePaper) +- [Wargaming with Monte-Carlo Tree Search](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf) (presentation) + +# Santorini +- [A Mathematical Analysis of the Game of Santorini](https://openworks.wooster.edu/independentstudy/8917/) (thesis) +- [A Mathematical Analysis of the Game of Santorini](https://github.com/carsongeissler/SantoriniIS) (computerProgram) + +# Scotland Yard +- [The complexity of Scotland Yard](https://eprints.illc.uva.nl/id/eprint/193/1/PP-2006-18.text.pdf) (report) + +# Secret Hitler +- [Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search](http://arxiv.org/abs/2005.07156) (journalArticle) + +# Set +- [Game, Set, Math](https://doi.org/10.4169%2Fmath.mag.85.2.083) (journalArticle) +- [The Joy of SET](https://doi.org/10.1080%2F00029890.2018.1412661) (journalArticle) + +# Settlers of Catan +- [The effectiveness of persuasion in The Settlers of Catan](https://doi.org/10.1109%2Fcig.2014.6932861) (conferencePaper) +- [Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan](https://doi.org/10.4018%2Fijgcms.2018040103) (journalArticle) +- [Game strategies for The Settlers of Catan](https://doi.org/10.1109%2Fcig.2014.6932884) (conferencePaper) +- [Monte-Carlo Tree Search in Settlers of Catan](https://doi.org/10.1007%2F978-3-642-12993-3_3) (bookSection) +- [Deep Reinforcement Learning in Strategic Board Game Environments](https://doi.org/10.1007%2F978-3-030-14174-5_16) (bookSection) +- [Settlers of Catan bot trained using reinforcement 
learning](https://jonzia.github.io/Catan/) (computerProgram) +- [Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue](https://escholarship.org/uc/item/9zt506xx) (conferencePaper) +- [POMCP with Human Preferences in Settlers of Catan](https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217) (journalArticle) +- [The impact of loaded dice in Catan](https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html) (blogPost) +- [Monte Carlo Tree Search in a Modern Board Game Framework](https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf) (journalArticle) +- [Reinforcement Learning of Strategies for Settlers of Catan](https://www.researchgate.net/publication/228728063_Reinforcement_learning_of_strategies_for_Settlers_of_Catan) (conferencePaper) +- [Playing Catan with Cross-dimensional Neural Network](http://arxiv.org/abs/2008.07079) (journalArticle) +- [Strategic Dialogue Management via Deep Reinforcement Learning](http://arxiv.org/abs/1511.08099) (journalArticle) + +# Shobu +- [Shobu AI Playground](https://github.com/JayWalker512/Shobu) (computerProgram) +- [Shobu randomly played games dataset](https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k) (webpage) + +# Terra Mystica +- [Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game](https://doi.org/10.1145%2F3396474.3396492) (conferencePaper) + +# Tetris Link +- [A New Challenge: Approaching Tetris Link with AI](http://arxiv.org/abs/2004.00377) (journalArticle) + +# Ticket to Ride +- [AI-based playtesting of contemporary board games](http://dl.acm.org/citation.cfm?doid=3102071.3102105) (conferencePaper) +- [Materials for Ticket to Ride Seattle and a framework for making more game boards](https://github.com/dovinmu/ttr_generator) 
(computerProgram) +- [The Difficulty of Learning Ticket to Ride](https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf) (report) +- [Evolving maps and decks for ticket to ride](https://dl.acm.org/doi/10.1145/3235765.3235813) (conferencePaper) +- [Applications of Graph Theory and Probability in the Board Game Ticket to Ride](https://www.rtealwitter.com/slides/2020-JMM.pdf) (presentation) + +# Ultimate Tic-Tac-Toe +- [At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe](http://arxiv.org/abs/2006.02353) (journalArticle) + +# UNO +- [UNO Is Hard, Even for a Single Player](https://doi.org/10.1007%2F978-3-642-13122-6_15) (bookSection) +- [The complexity of UNO](http://arxiv.org/abs/1003.2851v3) (journalArticle) + +# Yahtzee +- [Nearly Optimal Computer Play in Multi-player Yahtzee](https://doi.org/10.1007%2F978-3-642-17928-0_23) (bookSection) +- [Computer Strategies for Solitaire Yahtzee](https://doi.org/10.1109%2Fcig.2007.368089) (conferencePaper) +- [Modeling expert problem solving in a game of chance: a Yahtzee case study](https://doi.org/10.1111%2F1468-0394.00160) (journalArticle) +- [Probabilities in Yahtzee](https://pubs.nctm.org/view/journals/mt/75/9/article-p751.xml) (journalArticle) +- [Optimal Solitaire Yahtzee Strategies](http://www.yahtzee.org.uk/optimal_yahtzee_TV.pdf) (presentation) +- [Yahtzee: a Large Stochastic Environment for RL Benchmarks](http://researchers.lille.inria.fr/~lazaric/Webpage/PublicationsByTopic_files/bonarini2005yahtzee.pdf) (journalArticle) +- [Optimal Yahtzee performance in multi-player games](https://www.csc.kth.se/utbildning/kth/kurser/DD143X/dkand13/Group4Per/report/12-serra-widell-nigata.pdf) (thesis) +- [How to Maximize Your Score in Solitaire Yahtzee](http://www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf) (manuscript) +- [Using Deep Q-Learning to Compare Strategy Ladders of 
Yahtzee](https://raw.githubusercontent.com/philvasseur/Yahtzee-DQN-Thesis/dcf2bfe15c3b8c0ff3256f02dd3c0aabdbcbc9bb/webpage/final_report.pdf) (thesis) +- [Defensive Yahtzee](http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168668) (report) +- [An Optimal Strategy for Yahtzee](http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf) (report) \ No newline at end of file diff --git a/boardgame-research.bib b/boardgame-research.bib deleted file mode 100644 index a1effd7..0000000 --- a/boardgame-research.bib +++ /dev/null @@ -1,1159 +0,0 @@ - -@inproceedings{guhe_effectiveness_2014, - address = {Dortmund, Germany}, - title = {The effectiveness of persuasion in {The} {Settlers} of {Catan}}, - isbn = {978-1-4799-3547-5}, - url = {http://ieeexplore.ieee.org/document/6932861/}, - doi = {10.1109/CIG.2014.6932861}, - urldate = {2020-07-20}, - booktitle = {2014 {IEEE} {Conference} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Guhe, Markus and Lascarides, Alex}, - month = aug, - year = {2014}, - pages = {1--8}, - file = {Submitted Version:/home/nemo/Zotero/storage/ITK52TEL/Guhe and Lascarides - 2014 - The effectiveness of persuasion in The Settlers of.pdf:application/pdf;Submitted Version:/home/nemo/Zotero/storage/HTNFAWSA/Guhe and Lascarides - 2014 - The effectiveness of persuasion in The Settlers of.pdf:application/pdf} -} - -@inproceedings{guhe_effectiveness_2014-1, - title = {The effectiveness of persuasion in {The} {Settlers} of {Catan}}, - url = {https://doi.org/10.1109%2Fcig.2014.6932861}, - doi = {10.1109/cig.2014.6932861}, - booktitle = {2014 {IEEE} {Conference} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Guhe, Markus and Lascarides, Alex}, - month = aug, - year = {2014}, - file = {Submitted Version:/home/nemo/Zotero/storage/WJWTC8A9/Guhe and Lascarides - 2014 - The effectiveness of persuasion in The Settlers of.pdf:application/pdf} -} - -@article{boda_avoiding_2018, - title = {Avoiding 
{Revenge} {Using} {Optimal} {Opponent} {Ranking} {Strategy} in the {Board} {Game} {Catan}}, - volume = {10}, - url = {https://doi.org/10.4018%2Fijgcms.2018040103}, - doi = {10.4018/ijgcms.2018040103}, - number = {2}, - journal = {International Journal of Gaming and Computer-Mediated Simulations}, - author = {Boda, Márton Attila}, - month = apr, - year = {2018}, - note = {Publisher: IGI Global}, - pages = {47--70}, - file = {Full Text:/home/nemo/Zotero/storage/C8XZ64B8/Boda - 2018 - Avoiding Revenge Using Optimal Opponent Ranking St.pdf:application/pdf} -} - -@inproceedings{guhe_game_2014, - title = {Game strategies for {The} {Settlers} of {Catan}}, - url = {https://doi.org/10.1109%2Fcig.2014.6932884}, - doi = {10.1109/cig.2014.6932884}, - booktitle = {2014 {IEEE} {Conference} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Guhe, Markus and Lascarides, Alex}, - month = aug, - year = {2014}, - file = {Submitted Version:/home/nemo/Zotero/storage/E7ARZVSI/Guhe and Lascarides - 2014 - Game strategies for The Settlers of Catan.pdf:application/pdf} -} - -@incollection{szita_monte-carlo_2010, - title = {Monte-{Carlo} {Tree} {Search} in {Settlers} of {Catan}}, - url = {https://doi.org/10.1007%2F978-3-642-12993-3_3}, - booktitle = {Lecture {Notes} in {Computer} {Science}}, - publisher = {Springer Berlin Heidelberg}, - author = {Szita, István and Chaslot, Guillaume and Spronck, Pieter}, - year = {2010}, - doi = {10.1007/978-3-642-12993-3_3}, - pages = {21--32}, - file = {Full Text:/home/nemo/Zotero/storage/T5DQM5GD/Szita et al. 
- 2010 - Monte-Carlo Tree Search in Settlers of Catan.pdf:application/pdf} -} - -@incollection{xenou_deep_2019, - title = {Deep {Reinforcement} {Learning} in {Strategic} {Board} {Game} {Environments}}, - url = {https://doi.org/10.1007%2F978-3-030-14174-5_16}, - booktitle = {Multi-{Agent} {Systems}}, - publisher = {Springer International Publishing}, - author = {Xenou, Konstantia and Chalkiadakis, Georgios and Afantenos, Stergos}, - year = {2019}, - doi = {10.1007/978-3-030-14174-5_16}, - pages = {233--248}, - file = {Accepted Version:/home/nemo/Zotero/storage/MXL35B77/Xenou et al. - 2019 - Deep Reinforcement Learning in Strategic Board Gam.pdf:application/pdf} -} - -@article{maliphant_mini-risk_1990, - title = {Mini-{Risk}: {Strategies} for a {Simplified} {Board} {Game}}, - volume = {41}, - url = {https://doi.org/10.1057%2Fjors.1990.2}, - doi = {10.1057/jors.1990.2}, - number = {1}, - journal = {Journal of the Operational Research Society}, - author = {Maliphant, Sarah A. and Smith, David K.}, - month = jan, - year = {1990}, - note = {Publisher: Informa UK Limited}, - pages = {9--16}, - file = {Full Text:/home/nemo/Zotero/storage/X6ZHBQT8/Maliphant and Smith - 1990 - Mini-Risk Strategies for a Simplified Board Game.pdf:application/pdf} -} - -@inproceedings{neves_learning_2002, - title = {Learning the risk board game with classifier systems}, - url = {https://doi.org/10.1145%2F508791.508904}, - doi = {10.1145/508791.508904}, - booktitle = {Proceedings of the 2002 {ACM} symposium on {Applied} computing - {SAC} {\textbackslash}textquotesingle02}, - publisher = {ACM Press}, - author = {Neves, Atila and Brasāo, Osvaldo and Rosa, Agostinho}, - year = {2002}, - file = {Full Text:/home/nemo/Zotero/storage/V8H6XI4Y/Neves et al. 
- 2002 - Learning the risk board game with classifier syste.pdf:application/pdf} -} - -@article{tan_markov_1997, - title = {Markov {Chains} and the {RISK} {Board} {Game}}, - volume = {70}, - url = {https://doi.org/10.1080%2F0025570x.1997.11996573}, - doi = {10.1080/0025570x.1997.11996573}, - number = {5}, - journal = {Mathematics Magazine}, - author = {Tan, Bariş}, - month = dec, - year = {1997}, - note = {Publisher: Informa UK Limited}, - pages = {349--357}, - file = {Full Text:/home/nemo/Zotero/storage/ZS9TP9BZ/Tan - 1997 - Markov Chains and the RISK Board Game.pdf:application/pdf} -} - -@article{osborne_markov_2003, - title = {Markov {Chains} for the {RISK} {Board} {Game} {Revisited}}, - volume = {76}, - url = {https://doi.org/10.1080%2F0025570x.2003.11953165}, - doi = {10.1080/0025570x.2003.11953165}, - number = {2}, - journal = {Mathematics Magazine}, - author = {Osborne, Jason A.}, - month = apr, - year = {2003}, - note = {Publisher: Informa UK Limited}, - pages = {129--135}, - file = {Full Text:/home/nemo/Zotero/storage/FI6LTX8L/Osborne - 2003 - Markov Chains for the RISK Board Game Revisited.pdf:application/pdf} -} - -@article{vaccaro_planning_2005, - title = {Planning an {Endgame} {Move} {Set} for the {Game} {RISK}: {A} {Comparison} of {Search} {Algorithms}}, - volume = {9}, - url = {https://doi.org/10.1109%2Ftevc.2005.856211}, - doi = {10.1109/tevc.2005.856211}, - number = {6}, - journal = {IEEE Trans. Evol. Computat.}, - author = {Vaccaro, J. M. and Guest, C. 
C.}, - month = dec, - year = {2005}, - note = {Publisher: Institute of Electrical and Electronics Engineers (IEEE)}, - pages = {641--652}, - file = {Full Text:/home/nemo/Zotero/storage/8Q7W86Z4/Vaccaro and Guest - 2005 - Planning an Endgame Move Set for the Game RISK A .pdf:application/pdf} -} - -@inproceedings{gedda_monte_2018, - title = {Monte {Carlo} {Methods} for the {Game} {Kingdomino}}, - url = {https://doi.org/10.1109%2Fcig.2018.8490419}, - doi = {10.1109/cig.2018.8490419}, - booktitle = {2018 {IEEE} {Conference} on {Computational} {Intelligence} and {Games} ({CIG})}, - publisher = {IEEE}, - author = {Gedda, Magnus and Lagerkvist, Mikael Z. and Butler, Martin}, - month = aug, - year = {2018}, - file = {Submitted Version:/home/nemo/Zotero/storage/BKV7VG59/Gedda et al. - 2018 - Monte Carlo Methods for the Game Kingdomino.pdf:application/pdf} -} - -@article{cox_how_2015, - title = {How to {Make} the {Perfect} {Fireworks} {Display}: {Two} {Strategies} {forHanabi}}, - volume = {88}, - url = {https://doi.org/10.4169%2Fmath.mag.88.5.323}, - doi = {10.4169/math.mag.88.5.323}, - number = {5}, - journal = {Mathematics Magazine}, - author = {Cox, Christopher and Silva, Jessica De and Deorsey, Philip and Kenter, Franklin H. J. and Retter, Troy and Tobin, Josh}, - month = dec, - year = {2015}, - note = {Publisher: Informa UK Limited}, - pages = {323--336}, - file = {Full Text:/home/nemo/Zotero/storage/E7PR5FAI/Cox et al. - 2015 - How to Make the Perfect Fireworks Display Two Str.pdf:application/pdf} -} - -@inproceedings{walton-rivers_evaluating_2017, - title = {Evaluating and modelling {Hanabi}-playing agents}, - url = {https://doi.org/10.1109%2Fcec.2017.7969465}, - doi = {10.1109/cec.2017.7969465}, - booktitle = {2017 {IEEE} {Congress} on {Evolutionary} {Computation} ({CEC})}, - publisher = {IEEE}, - author = {Walton-Rivers, Joseph and Williams, Piers R. 
and Bartle, Richard and Perez-Liebana, Diego and Lucas, Simon M.}, - month = jun, - year = {2017}, - file = {Accepted Version:/home/nemo/Zotero/storage/6LVCR5LJ/Walton-Rivers et al. - 2017 - Evaluating and modelling Hanabi-playing agents.pdf:application/pdf} -} - -@article{bard_hanabi_2020, - title = {The {Hanabi} challenge: {A} new frontier for {AI} research}, - volume = {280}, - url = {https://doi.org/10.1016%2Fj.artint.2019.103216}, - doi = {10.1016/j.artint.2019.103216}, - journal = {Artificial Intelligence}, - author = {Bard, Nolan and Foerster, Jakob N. and Chandar, Sarath and Burch, Neil and Lanctot, Marc and Song, H. Francis and Parisotto, Emilio and Dumoulin, Vincent and Moitra, Subhodeep and Hughes, Edward and Dunning, Iain and Mourad, Shibl and Larochelle, Hugo and Bellemare, Marc G. and Bowling, Michael}, - month = mar, - year = {2020}, - note = {Publisher: Elsevier BV}, - pages = {103216}, - file = {Full Text:/home/nemo/Zotero/storage/QK4PLTNC/Bard et al. - 2020 - The Hanabi challenge A new frontier for AI resear.pdf:application/pdf} -} - -@inproceedings{walton-rivers_2018_2019, - title = {The 2018 {Hanabi} competition}, - url = {https://doi.org/10.1109%2Fcig.2019.8848008}, - doi = {10.1109/cig.2019.8848008}, - booktitle = {2019 {IEEE} {Conference} on {Games} ({CoG})}, - publisher = {IEEE}, - author = {Walton-Rivers, Joseph and Williams, Piers R. and Bartle, Richard}, - month = aug, - year = {2019}, - file = {Accepted Version:/home/nemo/Zotero/storage/EG5MFSFH/Walton-Rivers et al. 
- 2019 - The 2018 Hanabi competition.pdf:application/pdf} -} - -@inproceedings{canaan_diverse_2019, - title = {Diverse {Agents} for {Ad}-{Hoc} {Cooperation} in {Hanabi}}, - url = {https://doi.org/10.1109%2Fcig.2019.8847944}, - doi = {10.1109/cig.2019.8847944}, - booktitle = {2019 {IEEE} {Conference} on {Games} ({CoG})}, - publisher = {IEEE}, - author = {Canaan, Rodrigo and Togelius, Julian and Nealen, Andy and Menzel, Stefan}, - month = aug, - year = {2019}, - file = {Submitted Version:/home/nemo/Zotero/storage/9WT5YA3E/Canaan et al. - 2019 - Diverse Agents for Ad-Hoc Cooperation in Hanabi.pdf:application/pdf} -} - -@article{ash_monopoly_1972, - title = {Monopoly as a {Markov} {Process}}, - volume = {45}, - url = {https://doi.org/10.1080%2F0025570x.1972.11976187}, - doi = {10.1080/0025570x.1972.11976187}, - number = {1}, - journal = {Mathematics Magazine}, - author = {Ash, Robert B. and Bishop, Richard L.}, - month = jan, - year = {1972}, - note = {Publisher: Informa UK Limited}, - pages = {26--29}, - file = {Submitted Version:/home/nemo/Zotero/storage/KZZXN75I/Ash and Bishop - 1972 - Monopoly as a Markov Process.pdf:application/pdf} -} - -@article{cowling_ensemble_2012, - title = {Ensemble {Determinization} in {Monte} {Carlo} {Tree} {Search} for the {Imperfect} {Information} {Card} {Game} {Magic}: {The} {Gathering}}, - volume = {4}, - url = {https://doi.org/10.1109%2Ftciaig.2012.2204883}, - doi = {10.1109/tciaig.2012.2204883}, - number = {4}, - journal = {IEEE Trans. Comput. Intell. AI Games}, - author = {Cowling, Peter I. and Ward, Colin D. and Powley, Edward J.}, - month = dec, - year = {2012}, - note = {Publisher: Institute of Electrical and Electronics Engineers (IEEE)}, - pages = {241--257}, - file = {Accepted Version:/home/nemo/Zotero/storage/JI5MQ857/Cowling et al. 
- 2012 - Ensemble Determinization in Monte Carlo Tree Searc.pdf:application/pdf} -} - -@article{bosch_optimal_2000, - title = {Optimal {Card}-{Collecting} {Strategies} for {Magic}: {The} {Gathering}}, - volume = {31}, - url = {https://doi.org/10.1080%2F07468342.2000.11974103}, - doi = {10.1080/07468342.2000.11974103}, - number = {1}, - journal = {The College Mathematics Journal}, - author = {Bosch, Robert A.}, - month = jan, - year = {2000}, - note = {Publisher: Informa UK Limited}, - pages = {15--21}, - file = {Full Text:/home/nemo/Zotero/storage/A6L5BUGS/Bosch - 2000 - Optimal Card-Collecting Strategies for Magic The .pdf:application/pdf} -} - -@inproceedings{ward_monte_2009, - title = {Monte {Carlo} search applied to card selection in {Magic}: {The} {Gathering}}, - url = {https://doi.org/10.1109%2Fcig.2009.5286501}, - doi = {10.1109/cig.2009.5286501}, - booktitle = {2009 {IEEE} {Symposium} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Ward, C. D. and Cowling, P. I.}, - month = sep, - year = {2009}, - file = {Full Text:/home/nemo/Zotero/storage/GR28QUPQ/Ward and Cowling - 2009 - Monte Carlo search applied to card selection in Ma.pdf:application/pdf} -} - -@incollection{demaine_is_2010, - title = {{UNO} {Is} {Hard}, {Even} for a {Single} {Player}}, - url = {https://doi.org/10.1007%2F978-3-642-13122-6_15}, - booktitle = {Lecture {Notes} in {Computer} {Science}}, - publisher = {Springer Berlin Heidelberg}, - author = {Demaine, Erik D. and Demaine, Martin L. and Uehara, Ryuhei and Uno, Takeaki and Uno, Yushi}, - year = {2010}, - doi = {10.1007/978-3-642-13122-6_15}, - pages = {133--144}, - file = {Submitted Version:/home/nemo/Zotero/storage/75SB8JSY/Demaine et al. 
- 2010 - UNO Is Hard, Even for a Single Player.pdf:application/pdf} -} - -@article{mishiba_quixo_2020, - title = {{QUIXO} is {EXPTIME}-complete}, - url = {https://doi.org/10.1016%2Fj.ipl.2020.105995}, - doi = {10.1016/j.ipl.2020.105995}, - journal = {Information Processing Letters}, - author = {Mishiba, Shohei and Takenaga, Yasuhiko}, - month = jul, - year = {2020}, - note = {Publisher: Elsevier BV}, - pages = {105995}, - file = {Full Text:/home/nemo/Zotero/storage/I6S8CB93/Mishiba and Takenaga - 2020 - QUIXO is EXPTIME-complete.pdf:application/pdf} -} - -@incollection{woolford_scout_2017, - title = {{SCOUT}: {A} {Case}-{Based} {Reasoning} {Agent} for {Playing} {Race} for the {Galaxy}}, - url = {https://doi.org/10.1007%2F978-3-319-61030-6_27}, - booktitle = {Case-{Based} {Reasoning} {Research} and {Development}}, - publisher = {Springer International Publishing}, - author = {Woolford, Michael and Watson, Ian}, - year = {2017}, - doi = {10.1007/978-3-319-61030-6_27}, - pages = {390--402}, - file = {Woolford and Watson - 2017 - SCOUT A Case-Based Reasoning Agent for Playing Ra.pdf:/home/nemo/Zotero/storage/LMIXD5XY/Woolford and Watson - 2017 - SCOUT A Case-Based Reasoning Agent for Playing Ra.pdf:application/pdf} -} - -@article{coleman_game_2012, - title = {Game, {Set}, {Math}}, - volume = {85}, - url = {https://doi.org/10.4169%2Fmath.mag.85.2.083}, - doi = {10.4169/math.mag.85.2.083}, - number = {2}, - journal = {Mathematics Magazine}, - author = {Coleman, Ben and Hartshorn, Kevin}, - month = apr, - year = {2012}, - note = {Publisher: Informa UK Limited}, - pages = {83--96}, - file = {Full Text:/home/nemo/Zotero/storage/UZL88CQ4/Coleman and Hartshorn - 2012 - Game, Set, Math.pdf:application/pdf} -} - -@article{glass_joy_2018, - title = {The {Joy} of {SET}}, - volume = {125}, - url = {https://doi.org/10.1080%2F00029890.2018.1412661}, - doi = {10.1080/00029890.2018.1412661}, - number = {3}, - journal = {The American Mathematical Monthly}, - author = {Glass, Darren}, - 
month = feb, - year = {2018}, - note = {Publisher: Informa UK Limited}, - pages = {284--288}, - file = {Full Text:/home/nemo/Zotero/storage/ID46MICU/Glass - 2018 - The Joy of SET.pdf:application/pdf} -} - -@incollection{lazarusli_implementation_2015, - title = {Implementation of {Artificial} {Intelligence} with 3 {Different} {Characters} of {AI} {Player} on “{Monopoly} {Deal}” {Computer} {Game}}, - url = {https://doi.org/10.1007%2F978-3-662-46742-8_11}, - booktitle = {Communications in {Computer} and {Information} {Science}}, - publisher = {Springer Berlin Heidelberg}, - author = {Lazarusli, Irene A. and Lukas, Samuel and Widjaja, Patrick}, - year = {2015}, - doi = {10.1007/978-3-662-46742-8_11}, - pages = {119--127} -} - -@incollection{pawlewicz_nearly_2011, - title = {Nearly {Optimal} {Computer} {Play} in {Multi}-player {Yahtzee}}, - url = {https://doi.org/10.1007%2F978-3-642-17928-0_23}, - booktitle = {Computers and {Games}}, - publisher = {Springer Berlin Heidelberg}, - author = {Pawlewicz, Jakub}, - year = {2011}, - doi = {10.1007/978-3-642-17928-0_23}, - pages = {250--262} -} - -@inproceedings{glenn_computer_2007, - title = {Computer {Strategies} for {Solitaire} {Yahtzee}}, - url = {https://doi.org/10.1109%2Fcig.2007.368089}, - doi = {10.1109/cig.2007.368089}, - booktitle = {2007 {IEEE} {Symposium} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Glenn, James R.}, - year = {2007}, - file = {Submitted Version:/home/nemo/Zotero/storage/GPCGB5MW/Glenn - 2007 - Computer Strategies for Solitaire Yahtzee.pdf:application/pdf} -} - -@article{maynard_modeling_2001, - title = {Modeling expert problem solving in a game of chance: a {Yahtzeec} case study}, - volume = {18}, - url = {https://doi.org/10.1111%2F1468-0394.00160}, - doi = {10.1111/1468-0394.00160}, - number = {2}, - journal = {Expert Systems}, - author = {Maynard, Ken and Moss, Patrick and Whitehead, Marcus and Narayanan, S. 
and Garay, Matt and Brannon, Nathan and Kantamneni, Raj Gopal and Kustra, Todd}, - month = may, - year = {2001}, - note = {Publisher: Wiley}, - pages = {88--98}, - file = {Full Text:/home/nemo/Zotero/storage/PG6NUX5X/Maynard et al. - 2001 - Modeling expert problem solving in a game of chanc.pdf:application/pdf} -} - -@incollection{oka_systematic_2016, - title = {Systematic {Selection} of {N}-{Tuple} {Networks} for 2048}, - url = {https://doi.org/10.1007%2F978-3-319-50935-8_8}, - booktitle = {Computers and {Games}}, - publisher = {Springer International Publishing}, - author = {Oka, Kazuto and Matsuzaki, Kiminori}, - year = {2016}, - doi = {10.1007/978-3-319-50935-8_8}, - pages = {81--92}, - file = {Full Text:/home/nemo/Zotero/storage/DAJB4HAP/Oka and Matsuzaki - 2016 - Systematic Selection of N-Tuple Networks for 2048.pdf:application/pdf} -} - -@inproceedings{matsuzaki_systematic_2016, - title = {Systematic selection of {N}-tuple networks with consideration of interinfluence for game 2048}, - url = {https://doi.org/10.1109%2Ftaai.2016.7880154}, - doi = {10.1109/taai.2016.7880154}, - booktitle = {2016 {Conference} on {Technologies} and {Applications} of {Artificial} {Intelligence} ({TAAI})}, - publisher = {IEEE}, - author = {Matsuzaki, Kiminori}, - month = nov, - year = {2016}, - file = {Full Text:/home/nemo/Zotero/storage/LYN4IZ38/Matsuzaki - 2016 - Systematic selection of N-tuple networks with cons.pdf:application/pdf} -} - -@inproceedings{rodgers_investigation_2014, - title = {An investigation into 2048 {AI} strategies}, - url = {https://doi.org/10.1109%2Fcig.2014.6932920}, - doi = {10.1109/cig.2014.6932920}, - booktitle = {2014 {IEEE} {Conference} on {Computational} {Intelligence} and {Games}}, - publisher = {IEEE}, - author = {Rodgers, Philip and Levine, John}, - month = aug, - year = {2014}, - file = {Full Text:/home/nemo/Zotero/storage/GWVBHIAP/Rodgers and Levine - 2014 - An investigation into 2048 AI strategies.pdf:application/pdf} -} - 
-@article{anthony_learning_2020, - title = {Learning to {Play} {No}-{Press} {Diplomacy} with {Best} {Response} {Policy} {Iteration}}, - url = {http://arxiv.org/abs/2006.04635v2}, - journal = {arxiv:2006.04635}, - author = {Anthony, Thomas and Eccles, Tom and Tacchetti, Andrea and Kramár, János and Gemp, Ian and Hudson, Thomas C. and Porcel, Nicolas and Lanctot, Marc and Pérolat, Julien and Everett, Richard and Singh, Satinder and Graepel, Thore and Bachrach, Yoram}, - year = {2020}, - file = {Full Text:/home/nemo/Zotero/storage/RKH36CBQ/Anthony et al. - 2020 - Learning to Play No-Press Diplomacy with Best Resp.pdf:application/pdf} -} - -@article{paquette_no_2019, - title = {No {Press} {Diplomacy}: {Modeling} {Multi}-{Agent} {Gameplay}}, - url = {http://arxiv.org/abs/1909.02128v2}, - journal = {arxiv:1909.02128}, - author = {Paquette, Philip and Lu, Yuchen and Bocco, Steven and Smith, Max O. and Ortiz-Gagne, Satya and Kummerfeld, Jonathan K. and Singh, Satinder and Pineau, Joelle and Courville, Aaron}, - year = {2019}, - file = {Full Text:/home/nemo/Zotero/storage/YHUCJAG8/Paquette et al. - 2019 - No Press Diplomacy Modeling Multi-Agent Gameplay.pdf:application/pdf} -} - -@article{tan_agent_2019, - title = {Agent {Madoff}: {A} {Heuristic}-{Based} {Negotiation} {Agent} {For} {The} {Diplomacy} {Strategy} {Game}}, - url = {http://arxiv.org/abs/1902.06996v1}, - journal = {arxiv:1902.06996}, - author = {Tan, Hao Hao}, - year = {2019}, - file = {Full Text:/home/nemo/Zotero/storage/6Z6CSYSZ/Tan - 2019 - Agent Madoff A Heuristic-Based Negotiation Agent .pdf:application/pdf} -} - -@article{gedda_monte_2018-1, - title = {Monte {Carlo} {Methods} for the {Game} {Kingdomino}}, - url = {http://arxiv.org/abs/1807.04458v2}, - journal = {arxiv:1807.04458}, - author = {Gedda, Magnus and Lagerkvist, Mikael Z. and Butler, Martin}, - year = {2018}, - file = {Full Text:/home/nemo/Zotero/storage/T2BSPBPV/Gedda et al. 
- 2018 - Monte Carlo Methods for the Game Kingdomino.pdf:application/pdf}
-}
-
-@article{nguyen_np-completeness_2019,
-	title = {{NP}-completeness of the game {Kingdomino}},
-	url = {http://arxiv.org/abs/1909.02849v3},
-	journal = {arxiv:1909.02849},
-	author = {Nguyen, Viet-Ha and Perrot, Kevin and Vallet, Mathieu},
-	year = {2019},
-	file = {Full Text:/home/nemo/Zotero/storage/32L6ZKCA/Nguyen et al. - 2019 - NP-completeness of the game Kingdomino.pdf:application/pdf}
-}
-
-@article{lerer_improving_2019,
-	title = {Improving {Policies} via {Search} in {Cooperative} {Partially} {Observable} {Games}},
-	url = {http://arxiv.org/abs/1912.02318v1},
-	journal = {arxiv:1912.02318},
-	author = {Lerer, Adam and Hu, Hengyuan and Foerster, Jakob and Brown, Noam},
-	year = {2019},
-	file = {Full Text:/home/nemo/Zotero/storage/F2N99DK9/Lerer et al. - 2019 - Improving Policies via Search in Cooperative Parti.pdf:application/pdf}
-}
-
-@article{baffier_hanabi_2016,
-	title = {Hanabi is {NP}-hard, {Even} for {Cheaters} who {Look} at {Their} {Cards}},
-	url = {http://arxiv.org/abs/1603.01911v3},
-	journal = {arxiv:1603.01911},
-	author = {Baffier, Jean-Francois and Chiu, Man-Kwun and Diez, Yago and Korman, Matias and Mitsou, Valia and Renssen, André van and Roeloffzen, Marcel and Uno, Yushi},
-	year = {2016},
-	file = {Full Text:/home/nemo/Zotero/storage/XMPLK7RJ/Baffier et al. - 2016 - Hanabi is NP-hard, Even for Cheaters who Look at T.pdf:application/pdf}
-}
-
-@article{canaan_generating_2020,
-	title = {Generating and {Adapting} to {Diverse} {Ad}-{Hoc} {Cooperation} {Agents} in {Hanabi}},
-	url = {http://arxiv.org/abs/2004.13710v2},
-	journal = {arxiv:2004.13710},
-	author = {Canaan, Rodrigo and Gao, Xianbo and Togelius, Julian and Nealen, Andy and Menzel, Stefan},
-	year = {2020},
-	file = {Full Text:/home/nemo/Zotero/storage/PDZQXHYY/Canaan et al. 
- 2020 - Generating and Adapting to Diverse Ad-Hoc Cooperat.pdf:application/pdf}
-}
-
-@article{canaan_evaluating_2020,
-	title = {Evaluating the {Rainbow} {DQN} {Agent} in {Hanabi} with {Unseen} {Partners}},
-	url = {http://arxiv.org/abs/2004.13291v1},
-	journal = {arxiv:2004.13291},
-	author = {Canaan, Rodrigo and Gao, Xianbo and Chung, Youjin and Togelius, Julian and Nealen, Andy and Menzel, Stefan},
-	year = {2020},
-	file = {Full Text:/home/nemo/Zotero/storage/DEVP82UJ/Canaan et al. - 2020 - Evaluating the Rainbow DQN Agent in Hanabi with Un.pdf:application/pdf}
-}
-
-@article{biderman_magic_2020,
-	title = {Magic: the {Gathering} is as {Hard} as {Arithmetic}},
-	url = {http://arxiv.org/abs/2003.05119v1},
-	journal = {arxiv:2003.05119},
-	author = {Biderman, Stella},
-	year = {2020},
-	file = {Full Text:/home/nemo/Zotero/storage/N83MTIN9/Biderman - 2020 - Magic the Gathering is as Hard as Arithmetic.pdf:application/pdf}
-}
-
-@article{churchill_magic_2019,
-	title = {Magic: {The} {Gathering} is {Turing} {Complete}},
-	url = {http://arxiv.org/abs/1904.09828v2},
-	journal = {arxiv:1904.09828},
-	author = {Churchill, Alex and Biderman, Stella and Herrick, Austin},
-	year = {2019},
-	file = {Full Text:/home/nemo/Zotero/storage/5NW5WTWK/Churchill et al. - 2019 - Magic The Gathering is Turing Complete.pdf:application/pdf}
-}
-
-@article{zilio_neural_2018,
-	title = {Neural {Networks} {Models} for {Analyzing} {Magic}: the {Gathering} {Cards}},
-	url = {http://arxiv.org/abs/1810.03744v1},
-	journal = {arxiv:1810.03744},
-	author = {Zilio, Felipe and Prates, Marcelo},
-	year = {2018},
-	file = {Full Text:/home/nemo/Zotero/storage/VX32HLNF/Zilio et al. 
- 2018 - Neural Networks Models for Analyzing Magic the Ga.pdf:application/pdf}
-}
-
-@inproceedings{grichshenko_using_2020,
-	title = {Using {Tabu} {Search} {Algorithm} for {Map} {Generation} in the {Terra} {Mystica} {Tabletop} {Game}},
-	url = {https://doi.org/10.1145%2F3396474.3396492},
-	doi = {10.1145/3396474.3396492},
-	booktitle = {Proceedings of the 2020 4th {International} {Conference} on {Intelligent} {Systems}, {Metaheuristics} \& {Swarm} {Intelligence}},
-	publisher = {ACM},
-	author = {Grichshenko, Alexandr and Araújo, Luiz Jonatã Pires de and Gimaeva, Susanna and Brown, Joseph Alexander},
-	month = mar,
-	year = {2020},
-	file = {Submitted Version:/home/nemo/Zotero/storage/4LSZ3R5D/Grichshenko et al. - 2020 - Using Tabu Search Algorithm for Map Generation in .pdf:application/pdf}
-}
-
-@article{migdal_mathematical_2010,
-	title = {A mathematical model of the {Mafia} game},
-	url = {http://arxiv.org/abs/1009.1031v3},
-	journal = {arxiv:1009.1031},
-	author = {Migdał, Piotr},
-	year = {2010},
-	file = {Full Text:/home/nemo/Zotero/storage/RCJ7EPW7/Migdał - 2010 - A mathematical model of the Mafia game.pdf:application/pdf}
-}
-
-@article{demaine_complexity_2010,
-	title = {The complexity of {UNO}},
-	url = {http://arxiv.org/abs/1003.2851v3},
-	journal = {arxiv:1003.2851},
-	author = {Demaine, Erik D. and Demaine, Martin L. and Harvey, Nicholas J. A. and Uehara, Ryuhei and Uno, Takeaki and Uno, Yushi},
-	year = {2010},
-	file = {Full Text:/home/nemo/Zotero/storage/KNHHMQC3/Demaine et al. - 2010 - The complexity of UNO.pdf:application/pdf}
-}
-
-@article{almanza_trainyard_2016,
-	title = {Trainyard is {NP}-{Hard}},
-	url = {http://arxiv.org/abs/1603.00928v1},
-	journal = {arxiv:1603.00928},
-	author = {Almanza, Matteo and Leucci, Stefano and Panconesi, Alessandro},
-	year = {2016},
-	file = {Full Text:/home/nemo/Zotero/storage/6XZDBHIF/Almanza et al. 
- 2016 - Trainyard is NP-Hard.pdf:application/pdf}
-}
-
-@article{langerman_threes_2015,
-	title = {Threes!, {Fives}, 1024!, and 2048 are {Hard}},
-	url = {http://arxiv.org/abs/1505.04274v1},
-	journal = {arxiv:1505.04274},
-	author = {Langerman, Stefan and Uno, Yushi},
-	year = {2015},
-	file = {Full Text:/home/nemo/Zotero/storage/EKHK8LWW/Langerman and Uno - 2015 - Threes!, Fives, 1024!, and 2048 are Hard.pdf:application/pdf}
-}
-
-@article{eppstein_making_2018,
-	title = {Making {Change} in 2048},
-	url = {http://arxiv.org/abs/1804.07396v1},
-	journal = {arxiv:1804.07396},
-	author = {Eppstein, David},
-	year = {2018},
-	file = {Full Text:/home/nemo/Zotero/storage/MTEUWS7P/Eppstein - 2018 - Making Change in 2048.pdf:application/pdf}
-}
-
-@article{das_analysis_2018,
-	title = {Analysis of the {Game} "2048" and its {Generalization} in {Higher} {Dimensions}},
-	url = {http://arxiv.org/abs/1804.07393v2},
-	journal = {arxiv:1804.07393},
-	author = {Das, Madhuparna and Paul, Goutam},
-	year = {2018},
-	file = {Full Text:/home/nemo/Zotero/storage/IVPCDJKF/Das and Paul - 2018 - Analysis of the Game 2048 and its Generalization.pdf:application/pdf}
-}
-
-@article{yeh_multi-stage_2016,
-	title = {Multi-{Stage} {Temporal} {Difference} {Learning} for 2048-like {Games}},
-	url = {http://arxiv.org/abs/1606.07374v2},
-	journal = {arxiv:1606.07374},
-	author = {Yeh, Kun-Hao and Wu, I.-Chen and Hsueh, Chu-Hsuan and Chang, Chia-Chuan and Liang, Chao-Chin and Chiang, Han},
-	year = {2016},
-	file = {Full Text:/home/nemo/Zotero/storage/XYA7M7R4/Yeh et al. 
- 2016 - Multi-Stage Temporal Difference Learning for 2048-.pdf:application/pdf}
-}
-
-@article{mehta_2048_2014,
-	title = {2048 is ({PSPACE}) {Hard}, but {Sometimes} {Easy}},
-	url = {http://arxiv.org/abs/1408.6315v1},
-	journal = {arxiv:1408.6315},
-	author = {Mehta, Rahul},
-	year = {2014},
-	file = {Full Text:/home/nemo/Zotero/storage/TDMX7RFI/Mehta - 2014 - 2048 is (PSPACE) Hard, but Sometimes Easy.pdf:application/pdf}
-}
-
-@misc{noauthor_settlers_nodate,
-	title = {Settlers of {Catan} bot trained using reinforcement learning},
-	url = {https://jonzia.github.io/Catan/}
-}
-
-@inproceedings{guhe_trading_2012,
-	title = {Trading in a multiplayer board game: {Towards} an analysis of non-cooperative dialogue},
-	volume = {34},
-	booktitle = {Proceedings of the {Annual} {Meeting} of the {Cognitive} {Science} {Society}},
-	author = {Guhe, Markus and Lascarides, Alex},
-	year = {2012},
-	note = {Issue: 34},
-	file = {Guhe and Lascarides - 2012 - Trading in a multiplayer board game Towards an an.pdf:/home/nemo/Zotero/storage/AT8UHTXM/Guhe and Lascarides - 2012 - Trading in a multiplayer board game Towards an an.pdf:application/pdf}
-}
-
-@article{noauthor_pomcp_nodate,
-	title = {{POMCP} with {Human} {Preferences} in {Settlers} of {Catan}},
-	url = {https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217},
-	file = {POMCP with Human Preferencesin Settlers of Catan.pdf:/home/nemo/Zotero/storage/CA62SLVK/POMCP with Human Preferencesin Settlers of Catan.pdf:application/pdf}
-}
-
-@misc{noauthor_impact_nodate,
-	title = {The impact of loaded dice in {Catan}},
-	url = {https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html}
-}
-
-@article{noauthor_monte_nodate,
-	title = {Monte {Carlo} {Tree} {Search} in a {Modern} {Board} {Game} {Framework}},
-	url = {https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf},
-	file = {Full Text:/home/nemo/Zotero/storage/QJUD6RDZ/Monte 
Carlo Tree Search in a Modern Board Game Fra.pdf:application/pdf}
-}
-
-@book{pfeiffer_reinforcement_2004,
-	title = {Reinforcement {Learning} of {Strategies} for {Settlers} of {Catan}},
-	author = {Pfeiffer, Michael},
-	year = {2004},
-	file = {Pfeiffer - 2004 - Reinforcement Learning of Strategies for Settlers .pdf:/home/nemo/Zotero/storage/9KJ7QYK4/Pfeiffer - 2004 - Reinforcement Learning of Strategies for Settlers .pdf:application/pdf}
-}
-
-@misc{noauthor_intelligent_nodate,
-	title = {An {Intelligent} {Artificial} {Player} for the {Game} of {Risk}},
-	url = {http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf},
-	file = {An Intelligent Artificial Player for the Game of R.pdf:/home/nemo/Zotero/storage/89MUCUE7/An Intelligent Artificial Player for the Game of R.pdf:application/pdf}
-}
-
-@article{noauthor_risky_nodate,
-	title = {{RISKy} {Business}: {An} {In}-{Depth} {Look} at the {Game} {RISK}},
-	url = {https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3},
-	file = {RISKy Business An In-Depth Look at the Game RISK.pdf:/home/nemo/Zotero/storage/PT8CWUJ5/RISKy Business An In-Depth Look at the Game RISK.pdf:application/pdf}
-}
-
-@article{noauthor_risk_nodate,
-	title = {{RISK} {Board} {Game} ‐ {Battle} {Outcome} {Analysis}},
-	url = {http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf},
-	file = {RISK Board Game ‐ Battle Outcome Analysis.pdf:/home/nemo/Zotero/storage/IJR85DGR/RISK Board Game ‐ Battle Outcome Analysis.pdf:application/pdf;Full Text:/home/nemo/Zotero/storage/WPYDQ5CF/RISK Board Game ‐ Battle Outcome Analysis.pdf:application/pdf}
-}
-
-@book{olsson_multi-agent_2005,
-	title = {A multi-agent system for playing the board game risk},
-	author = {Olsson, Fredrik},
-	year = {2005}
-}
-
-@misc{noauthor_state_nodate,
-	title = {State {Representation} and {Polyomino} {Placement} for the {Game} {Patchwork}},
-	url = {https://zayenz.se/blog/post/patchwork-modref2019-paper/}
-}
-
-@article{lagerkvist_state_2020,
-	title = {State {Representation} and {Polyomino} {Placement} for the {Game} {Patchwork}},
-	url = {http://arxiv.org/abs/2001.04233},
-	abstract = {Modern board games are a rich source of entertainment for many people, but also contain interesting and challenging structures for game playing research and implementing game playing agents. This paper studies the game Patchwork, a two player strategy game using polyomino tile drafting and placement. The core polyomino placement mechanic is implemented in a constraint model using regular constraints, extending and improving the model in (Lagerkvist, Pesant, 2008) with: explicit rotation handling; optional placements; and new constraints for resource usage. Crucial for implementing good game playing agents is to have great heuristics for guiding the search when faced with large branching factors. This paper divides placing tiles into two parts: a policy used for placing parts and an evaluation used to select among different placements. Policies are designed based on classical packing literature as well as common standard constraint programming heuristics. For evaluation, global propagation guided regret is introduced, choosing placements based on not ruling out later placements. 
Extensive evaluations are performed, showing the importance of using a good evaluation and that the proposed global propagation guided regret is a very effective guide.},
-	urldate = {2020-07-21},
-	journal = {arXiv:2001.04233 [cs]},
-	author = {Lagerkvist, Mikael Zayenz},
-	month = jan,
-	year = {2020},
-	note = {arXiv: 2001.04233},
-	keywords = {Computer Science - Artificial Intelligence},
-	annote = {Code: https://github.com/zayenz/cp-mod-ref-2019-patchwork
-	},
-	annote = {Comment: In ModRef 2019, The 18th workshop on Constraint Modelling and Reformulation},
-	file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/TEV9W4CI/Lagerkvist - 2020 - State Representation and Polyomino Placement for t.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/CWN9FKIC/2001.html:text/html}
-}
-
-@misc{noauthor_state_nodate-1,
-	title = {State {Representation} and {Polyomino} {Placement} for the {Game} {Patchwork}},
-	url = {https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf},
-	file = {Full Text:/home/nemo/Zotero/storage/JVLQG3BV/State Representation and Polyomino Placement for t.pdf:application/pdf}
-}
-
-@article{lagerkvist_nmbr9_2020,
-	title = {Nmbr9 as a {Constraint} {Programming} {Challenge}},
-	url = {http://arxiv.org/abs/2001.04238},
-	abstract = {Modern board games are a rich source of interesting and new challenges for combinatorial problems. The game Nmbr9 is a solitaire style puzzle game using polyominoes. The rules of the game are simple to explain, but modelling the game effectively using constraint programming is hard. This abstract presents the game, contributes new generalized variants of the game suitable for benchmarking and testing, and describes a model for the presented variants. 
The question of the top possible score in the standard game is an open challenge.},
-	urldate = {2020-07-21},
-	journal = {arXiv:2001.04238 [cs]},
-	author = {Lagerkvist, Mikael Zayenz},
-	month = jan,
-	year = {2020},
-	note = {arXiv: 2001.04238},
-	keywords = {Computer Science - Artificial Intelligence},
-	annote = {Code: https://github.com/zayenz/cp-2019-nmbr9/},
-	annote = {Comment: Abstract at the 25th International Conference on Principles and Practice of Constraint Programming},
-	file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/6YRVYGL7/Lagerkvist - 2020 - Nmbr9 as a Constraint Programming Challenge.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/ZP8RUPEW/2001.html:text/html}
-}
-
-@misc{noauthor_nmbr9_nodate,
-	title = {Nmbr9 as a {Constraint} {Programming} {Challenge}},
-	url = {https://zayenz.se/blog/post/nmbr9-cp2019-abstract/}
-}
-
-@inproceedings{goodman_re-determinizing_2019,
-	title = {Re-determinizing {MCTS} in {Hanabi}},
-	doi = {10.1109/CIG.2019.8848097},
-	author = {Goodman, James},
-	year = {2019},
-	pages = {1--8},
-	file = {Goodman - 2019 - Re-determinizing MCTS in Hanabi.pdf:/home/nemo/Zotero/storage/MQ8AF9RF/Goodman - 2019 - Re-determinizing MCTS in Hanabi.pdf:application/pdf}
-}
-
-@inproceedings{canaan_evolving_2018,
-	address = {Maastricht},
-	title = {Evolving {Agents} for the {Hanabi} 2018 {CIG} {Competition}},
-	isbn = {978-1-5386-4359-4},
-	url = {https://ieeexplore.ieee.org/document/8490449/},
-	doi = {10.1109/CIG.2018.8490449},
-	urldate = {2020-07-21},
-	booktitle = {2018 {IEEE} {Conference} on {Computational} {Intelligence} and {Games} ({CIG})},
-	publisher = {IEEE},
-	author = {Canaan, Rodrigo and Shen, Haotian and Torrado, Ruben and Togelius, Julian and Nealen, Andy and Menzel, Stefan},
-	month = aug,
-	year = {2018},
-	pages = {1--8},
-	file = {Submitted Version:/home/nemo/Zotero/storage/XP6SKHQI/Canaan et al. 
- 2018 - Evolving Agents for the Hanabi 2018 CIG Competitio.pdf:application/pdf}
-}
-
-@incollection{bosse_aspects_2017,
-	address = {Cham},
-	title = {Aspects of the {Cooperative} {Card} {Game} {Hanabi}},
-	volume = {765},
-	isbn = {978-3-319-67467-4 978-3-319-67468-1},
-	url = {http://link.springer.com/10.1007/978-3-319-67468-1_7},
-	urldate = {2020-07-21},
-	booktitle = {{BNAIC} 2016: {Artificial} {Intelligence}},
-	publisher = {Springer International Publishing},
-	author = {van den Bergh, Mark J. H. and Hommelberg, Anne and Kosters, Walter A. and Spieksma, Flora M.},
-	editor = {Bosse, Tibor and Bredeweg, Bert},
-	year = {2017},
-	doi = {10.1007/978-3-319-67468-1_7},
-	note = {Series Title: Communications in Computer and Information Science},
-	pages = {93--105},
-	file = {Full Text:/home/nemo/Zotero/storage/6TLZ7TUH/van den Bergh et al. - 2017 - Aspects of the Cooperative Card Game Hanabi.pdf:application/pdf}
-}
-
-@incollection{winands_playing_2017,
-	address = {Cham},
-	title = {Playing {Hanabi} {Near}-{Optimally}},
-	volume = {10664},
-	isbn = {978-3-319-71648-0 978-3-319-71649-7},
-	url = {http://link.springer.com/10.1007/978-3-319-71649-7_5},
-	urldate = {2020-07-21},
-	booktitle = {Advances in {Computer} {Games}},
-	publisher = {Springer International Publishing},
-	author = {Bouzy, Bruno},
-	editor = {Winands, Mark H.M. and van den Herik, H. 
Jaap and Kosters, Walter A.},
-	year = {2017},
-	doi = {10.1007/978-3-319-71649-7_5},
-	note = {Series Title: Lecture Notes in Computer Science},
-	pages = {51--62}
-}
-
-@inproceedings{eger_intentional_2017,
-	address = {New York, NY, USA},
-	title = {An intentional {AI} for hanabi},
-	isbn = {978-1-5386-3233-8},
-	url = {http://ieeexplore.ieee.org/document/8080417/},
-	doi = {10.1109/CIG.2017.8080417},
-	urldate = {2020-07-21},
-	booktitle = {2017 {IEEE} {Conference} on {Computational} {Intelligence} and {Games} ({CIG})},
-	publisher = {IEEE},
-	author = {Eger, Markus and Martens, Chris and Cordoba, Marcela Alfaro},
-	month = aug,
-	year = {2017},
-	pages = {68--75},
-	file = {Full Text:/home/nemo/Zotero/storage/E3H565Y9/Eger et al. - 2017 - An intentional AI for hanabi.pdf:application/pdf}
-}
-
-@inproceedings{osawa_solving_2015,
-	title = {Solving {Hanabi}: {Estimating} {Hands} by {Opponent}'s {Actions} in {Cooperative} {Game} with {Incomplete} {Information}},
-	url = {https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167},
-	abstract = {A unique behavior of humans is modifying one’s unobservable behavior based on the reaction of others for cooperation. We used a card game called Hanabi as an evaluation task of imitating human reflective intelligence with artificial intelligence. Hanabi is a cooperative card game with incomplete information. A player cooperates with an opponent in building several card sets constructed with the same color and ordered numbers. However, like a blind man's bluff, each player sees the cards of all other players except his/her own. Also, communication between players is restricted to information about the same numbers and colors, and the player is required to read his/his opponent's intention with the opponent's hand, estimate his/her cards with incomplete information, and play one of them for building a set. We compared human play with several simulated strategies. 
The results indicate that the strategy with feedbacks from simulated opponent's viewpoints achieves more score than other strategies.},
-	author = {Osawa, Hirotaka},
-	year = {2015},
-	file = {Osawa - 2015 - Solving Hanabi Estimating Hands by Opponent's Act.pdf:/home/nemo/Zotero/storage/7TRVJGUC/Osawa - 2015 - Solving Hanabi Estimating Hands by Opponent's Act.pdf:application/pdf}
-}
-
-@article{eger_browser-based_2017,
-	title = {A {Browser}-based {Interface} for the {Exploration} and {Evaluation} of {Hanabi} {AIs}},
-	url = {http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf},
-	language = {en},
-	journal = {Cape Cod},
-	author = {Eger, Markus and Martens, Chris},
-	year = {2017},
-	pages = {4},
-	annote = {URL: http://fdg2017.org/papers/FDG2017\_demo\_Hanabi.pdf},
-	file = {Eger and Martens - 2017 - A Browser-based Interface for the Exploration and .pdf:/home/nemo/Zotero/storage/RE7PCTMZ/Eger and Martens - 2017 - A Browser-based Interface for the Exploration and .pdf:application/pdf}
-}
-
-@article{gottwald_i_nodate,
-	title = {I see what you see: {Integrating} eye tracking into {Hanabi} playing agents},
-	abstract = {Humans’ eye movements convey a lot of information about their intentions, often unconsciously. Intelligent agents that cooperate with humans in various domains can benefit from interpreting this information. This paper contains a preliminary look at how eye tracking could be useful for agents that play the cooperative card game Hanabi with human players. We outline several situations in which an AI agent can utilize gaze information, and present an outlook on how we plan to integrate this with reimplementations of contemporary Hanabi agents.},
-	language = {en},
-	author = {Gottwald, Eva Tallula and Eger, Markus and Martens, Chris},
-	pages = {4},
-	annote = {URL: http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18\_Upload\_112.pdf
-	},
-	file = {Gottwald et al. 
- I see what you see Integrating eye tracking into .pdf:/home/nemo/Zotero/storage/5STNIF33/Gottwald et al. - I see what you see Integrating eye tracking into .pdf:application/pdf}
-}
-
-@misc{noauthor_state_nodate-2,
-	title = {State of the art {Hanabi} bots + simulation framework in rust},
-	url = {https://github.com/WuTheFWasThat/hanabi.rs}
-}
-
-@misc{noauthor_strategy_nodate,
-	title = {A strategy simulator for the well-known cooperative card game {Hanabi}},
-	url = {https://github.com/rjtobin/HanSim}
-}
-
-@misc{noauthor_framework_nodate,
-	title = {A framework for writing bots that play {Hanabi}},
-	url = {https://github.com/Quuxplusone/Hanabi}
-}
-
-@article{dehaan_jidoukan_2020,
-	series = {Ludic {Language} {Pedagogy}},
-	title = {Jidoukan {Jenga}: {Teaching} {English} through remixing games and game rules},
-	shorttitle = {Teaching {English} through remixing games and game rules},
-	url = {https://www.llpjournal.org/2020/04/13/jidokan-jenga.html},
-	abstract = {Let students play simple games in their L1. It’s ok!
-
-Then:
-
-You, the teacher, can help them critique the game in their L2.
-You, the teacher, can help them change the game in their L2.
-You, the teacher, can help them develop themselves.
-
-\#dropthestick \#dropthecarrot \#bringmeaning},
-	journal = {Ludic Language Pedagogy},
-	author = {deHaan, Jonathan},
-	month = apr,
-	year = {2020},
-	note = {📍 What is this? This is a recollection of a short lesson with some children. I used Jenga and a dictionary.
-📍 Why did you make it? I want to show language teachers that simple games, and playing simple games in students’ first language can be a great foundation for helping students learn new vocabulary, think critically, and exercise creativity.
-📍 Why is it radical? I taught using a simple board game (at a time when video games are over-focused on in research). I show what the learning looks like (I include a photo). 
The teaching and learning didn’t occur in a laboratory setting, but in the wild (in a community center). I focused on the learning around games.
-📍 Who is it for? Language teachers can easily implement this lesson using Jenga or any other game. Language researchers can expand on the translating and remixing potential around games.},
-	file = {deHaan - 2020 - Jidoukan Jenga Teaching English through remixing .pdf:/home/nemo/Zotero/storage/9B6YJUWQ/deHaan - 2020 - Jidoukan Jenga Teaching English through remixing .pdf:application/pdf}
-}
-
-@article{heron_meeple_2018,
-	title = {Meeple {Centred} {Design}: {A} {Heuristic} {Toolkit} for {Evaluating} the {Accessibility} of {Tabletop} {Games}},
-	volume = {7},
-	issn = {2052-773X},
-	shorttitle = {Meeple {Centred} {Design}},
-	url = {http://link.springer.com/10.1007/s40869-018-0057-8},
-	doi = {10.1007/s40869-018-0057-8},
-	language = {en},
-	number = {2},
-	urldate = {2020-07-28},
-	journal = {The Computer Games Journal},
-	author = {Heron, Michael James and Belford, Pauline Helen and Reid, Hayley and Crabb, Michael},
-	month = jun,
-	year = {2018},
-	pages = {97--114},
-	file = {Full Text:/home/nemo/Zotero/storage/A6WJQYW2/Heron et al. - 2018 - Meeple Centred Design A Heuristic Toolkit for Eva.pdf:application/pdf}
-}
-
-@article{heron_eighteen_2018,
-	title = {Eighteen {Months} of {Meeple} {Like} {Us}: {An} {Exploration} into the {State} of {Board} {Game} {Accessibility}},
-	volume = {7},
-	issn = {2052-773X},
-	shorttitle = {Eighteen {Months} of {Meeple} {Like} {Us}},
-	url = {http://link.springer.com/10.1007/s40869-018-0056-9},
-	doi = {10.1007/s40869-018-0056-9},
-	language = {en},
-	number = {2},
-	urldate = {2020-07-28},
-	journal = {The Computer Games Journal},
-	author = {Heron, Michael James and Belford, Pauline Helen and Reid, Hayley and Crabb, Michael},
-	month = jun,
-	year = {2018},
-	pages = {75--95},
-	file = {Full Text:/home/nemo/Zotero/storage/B3NFVIMW/Heron et al. 
- 2018 - Eighteen Months of Meeple Like Us An Exploration .pdf:application/pdf}
-}
-
-@phdthesis{andel_complexity_2020,
-	type = {Bachelor thesis},
-	title = {On the complexity of {Hive}},
-	shorttitle = {On the complexity of {Hive}},
-	url = {https://dspace.library.uu.nl/handle/1874/396955},
-	abstract = {It is shown that for an arbitrary position of a Hive game where both players have the same set of N pieces it is PSPACE-hard to determine whether one of the players has a winning strategy. The proof is done by reducing the known PSPACE-complete set of true quantified boolean formulas to a game concerning these formulas, then to the game generalised geography, then to a version of that game with the restriction of having only nodes with maximum degree 3, and finally to generalised Hive. This thesis includes a short introduction to the subject of computational complexity.},
-	language = {en-US},
-	school = {Utrecht University},
-	author = {Andel, Daniël},
-	month = may,
-	year = {2020},
-	file = {Andel - 2020 - On the complexity of Hive.pdf:/home/nemo/Zotero/storage/5TWTM295/Andel - 2020 - On the complexity of Hive.pdf:application/pdf}
-}
-
-@article{kunda_creative_2020,
-	title = {Creative {Captioning}: {An} {AI} {Grand} {Challenge} {Based} on the {Dixit} {Board} {Game}},
-	shorttitle = {Creative {Captioning}},
-	url = {http://arxiv.org/abs/2010.00048},
-	abstract = {We propose a new class of "grand challenge" AI problems that we call creative captioning---generating clever, interesting, or abstract captions for images, as well as understanding such captions. Creative captioning draws on core AI research areas of vision, natural language processing, narrative reasoning, and social reasoning, and across all these areas, it requires sophisticated uses of common sense and cultural knowledge. 
In this paper, we analyze several specific research problems that fall under creative captioning, using the popular board game Dixit as both inspiration and proposed testing ground. We expect that Dixit could serve as an engaging and motivating benchmark for creative captioning across numerous AI research communities for the coming 1-2 decades.},
-	urldate = {2020-10-12},
-	journal = {arXiv:2010.00048 [cs]},
-	author = {Kunda, Maithilee and Rabkina, Irina},
-	month = sep,
-	year = {2020},
-	note = {arXiv: 2010.00048},
-	keywords = {Computer Science - Artificial Intelligence},
-	file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/8VJ5WNFQ/Kunda and Rabkina - 2020 - Creative Captioning An AI Grand Challenge Based o.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/Y9MEAFJC/2010.html:text/html}
-}
-
-@misc{noauthor_shobu_nodate,
-	title = {Shobu {AI} {Playground}},
-	url = {https://github.com/JayWalker512/Shobu}
-}
-
-@misc{noauthor_shobu_nodate-1,
-	title = {Shobu randomly played games dataset},
-	url = {https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k}
-}
-
-@inproceedings{de_mesentier_silva_ai-based_2017,
-	address = {Hyannis, Massachusetts},
-	title = {{AI}-based playtesting of contemporary board games},
-	isbn = {978-1-4503-5319-9},
-	url = {http://dl.acm.org/citation.cfm?doid=3102071.3102105},
-	doi = {10.1145/3102071.3102105},
-	language = {en},
-	urldate = {2020-10-12},
-	booktitle = {Proceedings of the {International} {Conference} on the {Foundations} of {Digital} {Games} - {FDG} '17},
-	publisher = {ACM Press},
-	author = {de Mesentier Silva, Fernando and Lee, Scott and Togelius, Julian and Nealen, Andy},
-	year = {2017},
-	pages = {1--10},
-	file = {Full Text:/home/nemo/Zotero/storage/BYYCGVG7/de Mesentier Silva et al. 
- 2017 - AI-based playtesting of contemporary board games.pdf:application/pdf}
-}
-
-@misc{copley_materials_nodate,
-	title = {Materials for {Ticket} to {Ride} {Seattle} and a framework for making more game boards},
-	url = {https://github.com/dovinmu/ttr_generator},
-	author = {Copley, Rowan}
-}
-
-@techreport{nguyen_httpswwweecstuftsedujsinapovteachingcomp150_rlreportsnguyen_dinjian_reportpdf_nodate,
-	title = {https://www.eecs.tufts.edu/{\textasciitilde}jsinapov/teaching/comp150\_RL/reports/{Nguyen}\_Dinjian\_report.pdf},
-	url = {https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf},
-	abstract = {Ticket to Ride is a very popular, award-winning board game where you try to score the most points while building a railway spanning cities in America. For a computer to learn to play this game is very difficult due to the vast state-action space. This project will explain why featurizing your state, and implementing curriculum learning can help agents learn as state-action spaces grow too large for traditional learning methods to be effective.},
-	author = {Nguyen, Cuong and Dinjian, Daniel}
-}
-
-@inproceedings{de_mesentier_silva_evolving_2018,
-	address = {Malmö Sweden},
-	title = {Evolving maps and decks for ticket to ride},
-	isbn = {978-1-4503-6571-0},
-	url = {https://dl.acm.org/doi/10.1145/3235765.3235813},
-	doi = {10.1145/3235765.3235813},
-	language = {en},
-	urldate = {2020-10-12},
-	booktitle = {Proceedings of the 13th {International} {Conference} on the {Foundations} of {Digital} {Games}},
-	publisher = {ACM},
-	author = {de Mesentier Silva, Fernando and Lee, Scott and Togelius, Julian and Nealen, Andy},
-	month = aug,
-	year = {2018},
-	pages = {1--7},
-	file = {Full Text:/home/nemo/Zotero/storage/LRU3P3CX/de Mesentier Silva et al. 
- 2018 - Evolving maps and decks for ticket to ride.pdf:application/pdf}
-}
-
-@misc{witter_applications_nodate,
-	title = {Applications of {Graph} {Theory} and {Probability} in the {Board} {Game} {Ticket} to {Ride}},
-	url = {https://www.rtealwitter.com/slides/2020-JMM.pdf},
-	author = {Witter, R. Teal and Lyford, Alex}
-}
-
-@article{gendre_playing_2020,
-	title = {Playing {Catan} with {Cross}-dimensional {Neural} {Network}},
-	url = {http://arxiv.org/abs/2008.07079},
-	abstract = {Catan is a strategic board game having interesting properties, including multi-player, imperfect information, stochastic, complex state space structure (hexagonal board where each vertex, edge and face has its own features, cards for each player, etc), and a large action space (including negotiation). Therefore, it is challenging to build AI agents by Reinforcement Learning (RL for short), without domain knowledge nor heuristics. In this paper, we introduce cross-dimensional neural networks to handle a mixture of information sources and a wide variety of outputs, and empirically demonstrate that the network dramatically improves RL in Catan. 
We also show that, for the first time, an RL agent can outperform jsettler, the best heuristic agent available.},
-	urldate = {2020-10-12},
-	journal = {arXiv:2008.07079 [cs, stat]},
-	author = {Gendre, Quentin and Kaneko, Tomoyuki},
-	month = aug,
-	year = {2020},
-	note = {arXiv: 2008.07079},
-	keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Statistics - Machine Learning},
-	annote = {Comment: 12 pages, 5 tables and 10 figures; submitted to the ICONIP 2020},
-	file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/AU6NYDIV/Gendre and Kaneko - 2020 - Playing Catan with Cross-dimensional Neural Networ.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/NKRW6UKC/2008.html:text/html}
-}
-
-@inproceedings{theodoridis_monte_2020,
-	address = {Athens Greece},
-	title = {Monte {Carlo} {Tree} {Search} for the {Game} of {Diplomacy}},
-	isbn = {978-1-4503-8878-8},
-	url = {https://dl.acm.org/doi/10.1145/3411408.3411413},
-	doi = {10.1145/3411408.3411413},
-	language = {en},
-	urldate = {2020-10-12},
-	booktitle = {11th {Hellenic} {Conference} on {Artificial} {Intelligence}},
-	publisher = {ACM},
-	author = {Theodoridis, Alexios and Chalkiadakis, Georgios},
-	month = sep,
-	year = {2020},
-	pages = {16--25}
-}
-
-@article{eger_operationalizing_2020,
-	title = {Operationalizing {Intentionality} to {Play} {Hanabi} with {Human} {Players}},
-	issn = {2475-1502, 2475-1510},
-	url = {https://ieeexplore.ieee.org/document/9140404/},
-	doi = {10.1109/TG.2020.3009359},
-	urldate = {2020-11-26},
-	journal = {IEEE Transactions on Games},
-	author = {Eger, Markus and Martens, Chris and Sauma Chacon, Pablo and Alfaro Cordoba, Marcela and Hidalgo Cespedes, Jeisson},
-	year = {2020},
-	pages = {1--1},
-	file = {Full Text:/home/nemo/Zotero/storage/V2M3QSJG/Eger et al. 
- 2020 - Operationalizing Intentionality to Play Hanabi wit.pdf:application/pdf} -} - -@article{canaan_behavioral_2020, - title = {Behavioral {Evaluation} of {Hanabi} {Rainbow} {DQN} {Agents} and {Rule}-{Based} {Agents}}, - volume = {16}, - url = {https://ojs.aaai.org/index.php/AIIDE/article/view/7404}, - abstract = {\<p class=\"abstract\"\>Hanabi is a multiplayer cooperative card game, where only your partners know your cards. All players succeed or fail together. This makes the game an excellent testbed for studying collaboration. Recently, it has been shown that deep neural networks can be trained through self-play to play the game very well. However, such agents generally do not play well with others. In this paper, we investigate the consequences of training Rainbow DQN agents with human-inspired rule-based agents. We analyze with which agents Rainbow agents learn to play well, and how well playing skill transfers to agents they were not trained with. We also analyze patterns of communication between agents to elucidate how collaboration happens. A key finding is that while most agents only learn to play well with partners seen during training, one particular agent leads the Rainbow algorithm towards a much more general policy. 
The metrics and hypotheses advanced in this paper can be used for further study of collaborative agents.\</p\>}, - number = {1}, - urldate = {2020-11-26}, - journal = {Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment}, - author = {Canaan, Rodrigo and Gao, Xianbo and Chung, Youjin and Togelius, Julian and Nealen, Andy and Menzel, Stefan}, - month = oct, - year = {2020}, - note = {Section: Full Oral Papers}, - pages = {31--37} -} - -@inproceedings{_playing_2020, - title = {Playing mini-{Hanabi} card game with {Q}-learning}, - volume = {2020}, - url = {http://id.nii.ac.jp/1001/00205046/}, - booktitle = {第82回全国大会講演論文集}, - author = {ひい, とう and 市来, 正裕 and 中里, 研一}, - month = feb, - year = {2020}, - note = {Issue: 1}, - pages = {41--42} -} - -@article{reinhardt_competing_2020, - title = {Competing in a {Complex} {Hidden} {Role} {Game} with {Information} {Set} {Monte} {Carlo} {Tree} {Search}}, - url = {http://arxiv.org/abs/2005.07156}, - abstract = {Advances in intelligent game playing agents have led to successes in perfect information games like Go and imperfect information games like Poker. The Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms outperforms previous algorithms using Monte Carlo methods in imperfect information games. In this paper, Single Observer Information Set Monte Carlo Tree Search (SO-ISMCTS) is applied to Secret Hitler, a popular social deduction board game that combines traditional hidden role mechanics with the randomness of a card deck. This combination leads to a more complex information model than the hidden role and card deck mechanics alone. 
It is shown in 10108 simulated games that SO-ISMCTS plays as well as simpler rule based agents, and demonstrates the potential of ISMCTS algorithms in complicated information set domains.}, - urldate = {2020-11-26}, - journal = {arXiv:2005.07156 [cs]}, - author = {Reinhardt, Jack}, - month = may, - year = {2020}, - note = {arXiv: 2005.07156}, - keywords = {Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems}, - file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/D7TPSJ4Q/Reinhardt - 2020 - Competing in a Complex Hidden Role Game with Infor.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/TZ64EN6T/2005.html:text/html} -} - -@article{ameneyro_playing_2020, - title = {Playing {Carcassonne} with {Monte} {Carlo} {Tree} {Search}}, - url = {http://arxiv.org/abs/2009.12974}, - abstract = {Monte Carlo Tree Search (MCTS) is a relatively new sampling method with multiple variants in the literature. They can be applied to a wide variety of challenging domains including board games, video games, and energy-based problems to mention a few. In this work, we explore the use of the vanilla MCTS and the MCTS with Rapid Action Value Estimation (MCTS-RAVE) in the game of Carcassonne, a stochastic game with a deceptive scoring system where limited research has been conducted. We compare the strengths of the MCTS-based methods with the Star2.5 algorithm, previously reported to yield competitive results in the game of Carcassonne when a domain-specific heuristic is used to evaluate the game states. We analyse the particularities of the strategies adopted by the algorithms when they share a common reward system. 
The MCTS-based methods consistently outperformed the Star2.5 algorithm given their ability to find and follow long-term strategies, with the vanilla MCTS exhibiting a more robust game-play than the MCTS-RAVE.}, - urldate = {2021-01-02}, - journal = {arXiv:2009.12974 [cs]}, - author = {Ameneyro, Fred Valdez and Galvan, Edgar and Morales, Anger Fernando Kuri}, - month = oct, - year = {2020}, - note = {arXiv: 2009.12974}, - keywords = {Computer Science - Artificial Intelligence}, - annote = {Comment: 8 pages, 6 figures}, - file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/KWUZF6UF/Ameneyro et al. - 2020 - Playing Carcassonne with Monte Carlo Tree Search.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/KGFBVHU7/2009.html:text/html} -} - -@article{tanaka_quixo_2020, - title = {Quixo {Is} {Solved}}, - url = {http://arxiv.org/abs/2007.15895}, - abstract = {Quixo is a two-player game played on a 5\${\textbackslash}times\$5 grid where the players try to align five identical symbols. Specifics of the game require the usage of novel techniques. Using a combination of value iteration and backward induction, we propose the first complete analysis of the game. We describe memory-efficient data structures and algorithmic optimizations that make the game solvable within reasonable time and space constraints. Our main conclusion is that Quixo is a Draw game. The paper also contains the analysis of smaller boards and presents some interesting states extracted from our computations.}, - urldate = {2021-01-02}, - journal = {arXiv:2007.15895 [cs]}, - author = {Tanaka, Satoshi and Bonnet, François and Tixeuil, Sébastien and Tamura, Yasumasa}, - month = jul, - year = {2020}, - note = {arXiv: 2007.15895}, - keywords = {Computer Science - Computer Science and Game Theory}, - annote = {Comment: 19 pages}, - file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/ENGW8PNA/Tanaka et al. 
- 2020 - Quixo Is Solved.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/YZIUUDN9/2007.html:text/html} -} - -@article{bertholon_at_2020, - title = {At {Most} 43 {Moves}, {At} {Least} 29: {Optimal} {Strategies} and {Bounds} for {Ultimate} {Tic}-{Tac}-{Toe}}, - shorttitle = {At {Most} 43 {Moves}, {At} {Least} 29}, - url = {http://arxiv.org/abs/2006.02353}, - abstract = {Ultimate Tic-Tac-Toe is a variant of the well known tic-tac-toe (noughts and crosses) board game. Two players compete to win three aligned "fields", each of them being a tic-tac-toe game. Each move determines which field the next player must play in. We show that there exist a winning strategy for the first player, and therefore that there exist an optimal winning strategy taking at most 43 moves; that the second player can hold on at least 29 rounds; and identify any optimal strategy's first two moves.}, - urldate = {2021-01-02}, - journal = {arXiv:2006.02353 [cs]}, - author = {Bertholon, Guillaume and Géraud-Stewart, Rémi and Kugelmann, Axel and Lenoir, Théo and Naccache, David}, - month = jun, - year = {2020}, - note = {arXiv: 2006.02353}, - keywords = {Computer Science - Computer Science and Game Theory}, - file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/USYULUK5/Bertholon et al. - 2020 - At Most 43 Moves, At Least 29 Optimal Strategies .pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/FWCEA7V4/2006.html:text/html} -} - -@article{muller-brockhausen_new_2020, - title = {A {New} {Challenge}: {Approaching} {Tetris} {Link} with {AI}}, - shorttitle = {A {New} {Challenge}}, - url = {http://arxiv.org/abs/2004.00377}, - abstract = {Decades of research have been invested in making computer programs for playing games such as Chess and Go. This paper focuses on a new game, Tetris Link, a board game that is still lacking any scientific analysis. Tetris Link has a large branching factor, hampering a traditional heuristic planning approach. 
We explore heuristic planning and two other approaches: Reinforcement Learning, Monte Carlo tree search. We document our approach and report on their relative performance in a tournament. Curiously, the heuristic approach is stronger than the planning/learning approaches. However, experienced human players easily win the majority of the matches against the heuristic planning AIs. We, therefore, surmise that Tetris Link is more difficult than expected. We offer our findings to the community as a challenge to improve upon.}, - urldate = {2021-01-02}, - journal = {arXiv:2004.00377 [cs]}, - author = {Muller-Brockhausen, Matthias and Preuss, Mike and Plaat, Aske}, - month = apr, - year = {2020}, - note = {arXiv: 2004.00377}, - keywords = {Computer Science - Artificial Intelligence}, - file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/CJNXCN3A/Muller-Brockhausen et al. - 2020 - A New Challenge Approaching Tetris Link with AI.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/4NNBCTUY/2004.html:text/html} -} diff --git a/boardgame-research.rdf b/boardgame-research.rdf index 6bcab13..4a3898b 100644 --- a/boardgame-research.rdf +++ b/boardgame-research.rdf @@ -4,87 +4,10 @@ xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:bib="http://purl.org/net/biblio#" - xmlns:vcard="http://nwalsh.com/rdf/vCard#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:link="http://purl.org/rss/1.0/modules/link/" - xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/"> - - conferencePaper - - - ISBN 978-1-4799-3547-5 - 2014 IEEE Conference on Computational Intelligence and Games - DOI 10.1109/CIG.2014.6932861 - - - - - - - Dortmund, Germany - - - IEEE - - - - - - - Guhe - Markus - - - - - Lascarides - Alex - - - - - - - The effectiveness of persuasion in The Settlers of Catan - 8/2014 - DOI.org (Crossref) - - - http://ieeexplore.ieee.org/document/6932861/ - - - 2020-07-20 18:00:29 - 1-8 - - - 2014 IEEE Conference on Computational 
Intelligence and Games (CIG) - - - - - attachment - Submitted Version - - - https://www.pure.ed.ac.uk/ws/files/19353900/CIG2014.pdf - - - 2020-07-20 18:01:02 - 1 - application/pdf - - - attachment - Submitted Version - - - https://www.pure.ed.ac.uk/ws/files/19353900/CIG2014.pdf - - - 2020-07-20 18:01:16 - 1 - application/pdf - + xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/" + xmlns:vcard="http://nwalsh.com/rdf/vCard#"> conferencePaper @@ -2944,7 +2867,7 @@ MATLAB - + conferencePaper @@ -2971,6 +2894,11 @@ Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue 2012 + + + https://escholarship.org/uc/item/9zt506xx + + Issue: 34 @@ -3033,8 +2961,11 @@ 1 application/pdf - - book + + conferencePaper + + + @@ -3048,7 +2979,12 @@ Reinforcement Learning of Strategies for Settlers of Catan 2004 - + + + https://www.researchgate.net/publication/228728063_Reinforcement_learning_of_strategies_for_Settlers_of_Catan + + + attachment Pfeiffer - 2004 - Reinforcement Learning of Strategies for Settlers .pdf @@ -3056,8 +2992,18 @@ presentation + + + + + Michael Wolf + + + + An Intelligent Artificial Player for the Game of Risk + 20/04/2005 http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf @@ -3092,7 +3038,6 @@ - RISK Board Game ‐ Battle Outcome Analysis @@ -3101,11 +3046,6 @@ - - attachment - RISK Board Game ‐ Battle Outcome Analysis.pdf - application/pdf - attachment Full Text @@ -3118,8 +3058,13 @@ 1 application/pdf - - book + + thesis + + + Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering. + + @@ -3130,9 +3075,42 @@ + + A multi-agent system for playing the board game risk + Risk is a game in which traditional Artificial-Intelligence methods such as for example iterative deepening and Alpha-Beta pruning can not successfully be applied due to the size of the search space. 
Distributed problem solving in the form of a multi-agent system might be the solution. This needs to be tested before it is possible to tell if a multi-agent system will be successful at playing Risk or not. In this thesis the development of a multi-agent system that plays Risk is explained. The system places an agent in every country on the board and uses a central agent for organizing communication. An auction mechanism is used for negotiation. The experiments show that a multi-agent solution indeed is a prosperous approach when developing a computer based player for the board game Risk. 2005 - + + + http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3781 + + + 51 + Independent thesis Advanced level (degree of Master (One Year)) + + + attachment + Full Text + + + http://bth.diva-portal.org/smash/get/diva2:831093/FULLTEXT01 + + + 2021-07-24 08:26:48 + 1 + application/pdf + + + attachment + Full Text + + + https://www.diva-portal.org/smash/get/diva2:831093/FULLTEXT01.pdf + + + 2021-07-24 08:28:25 + 3 + blogPost @@ -3160,8 +3138,6 @@ - - @@ -3181,13 +3157,6 @@ 2020-07-21 10:55:58 arXiv: 2001.04233 - - <p>Code: <a href="https://github.com/zayenz/cp-mod-ref-2019-patchwork">https://github.com/zayenz/cp-mod-ref-2019-patchwork</a></p> -<p> </p> - - - <p>Comment: In ModRef 2019, The 18th workshop on Constraint Modelling and Reformulation</p> - attachment arXiv Fulltext PDF @@ -3249,8 +3218,6 @@ - - @@ -3270,12 +3237,6 @@ 2020-07-21 10:57:58 arXiv: 2001.04238 - - <p>Code: https://github.com/zayenz/cp-2019-nmbr9/</p> - - - <p>Comment: Abstract at the 25th International Conference on Principles and Practice of Constraint Programming</p> - attachment arXiv Fulltext PDF @@ -3704,7 +3665,6 @@ DOI: 10.1007/978-3-319-71649-7_5 - A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs 2017 @@ -3717,9 +3677,6 @@ DOI: 10.1007/978-3-319-71649-7_5 4 - - <p>URL: http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf</p> - attachment Eger and Martens - 2017 - A Browser-based 
Interface for the Exploration and .pdf - application/pdf - - - - @@ -3752,7 +3709,6 @@ DOI: 10.1007/978-3-319-71649-7_5 - - I see what you see: Integrating eye tracking into Hanabi playing agents - Humans’ eye movements convey a lot of information about their intentions, often unconsciously. Intelligent agents that cooperate with humans in various domains can benefit from interpreting this information. This paper contains a preliminary look at how eye tracking could be useful for agents that play the cooperative card game Hanabi with human players. We outline several situations in which an AI agent can utilize gaze information, and present an outlook on how we plan to integrate this with reimplementations of contemporary Hanabi agents. - 2018 - Zotero - 4 - - <p>URL: <a href="http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf">http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf</a></p> -<p> </p> - - attachment - Gottwald et al. - I see what you see Integrating eye tracking into .pdf @@ -4225,7 +4177,8 @@ DOI: 10.1007/978-3-319-71649-7_5 - https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf + + The Difficulty of Learning Ticket to Ride Ticket to Ride is a very popular, award-winning board-game where you try to score the most points while building a railway spanning cities in America. For a computer to learn to play this game is very difficult due to the vast state-action space. This project will explain why featurizing your state, and implementing curriculum learning can help agents learn as state-action spaces grow too large for traditional learning methods to be effective.
@@ -4233,6 +4186,18 @@ DOI: 10.1007/978-3-319-71649-7_5 + + attachment + Full Text + + + https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf + + + 2021-07-24 08:19:13 + 1 + application/pdf + conferencePaper @@ -4310,31 +4275,6 @@ DOI: 10.1007/978-3-319-71649-7_5 1 application/pdf - - presentation - - - - - Witter - R. Teal - - - - - Lyford - Alex - - - - - Applications of Graph Theory andProbability in the Board GameTicket toRide - - - https://www.rtealwitter.com/slides/2020-JMM.pdf - - - journalArticle @@ -4358,7 +4298,6 @@ DOI: 10.1007/978-3-319-71649-7_5 - @@ -4388,9 +4327,6 @@ DOI: 10.1007/978-3-319-71649-7_5 2020-10-12 04:19:57 arXiv: 2008.07079 - - Comment: 12 pages, 5 tables and 10 figures; submitted to the ICONIP 2020 - attachment arXiv Fulltext PDF @@ -4749,7 +4685,6 @@ DOI: 10.1007/978-3-319-71649-7_5 - @@ -4769,9 +4704,6 @@ DOI: 10.1007/978-3-319-71649-7_5 2021-01-02 18:13:09 arXiv: 2009.12974 - - Comment: 8 pages, 6 figures - attachment arXiv Fulltext PDF @@ -4829,7 +4761,6 @@ DOI: 10.1007/978-3-319-71649-7_5 - @@ -4849,9 +4780,6 @@ DOI: 10.1007/978-3-319-71649-7_5 2021-01-02 18:17:10 arXiv: 2007.15895 - - Comment: 19 pages - attachment arXiv Fulltext PDF @@ -5030,11 +4958,2892 @@ DOI: 10.1007/978-3-319-71649-7_5 1 text/html + + journalArticle + + arXiv:1511.08099 [cs] + + + + + + Cuayáhuitl + Heriberto + + + + + Keizer + Simon + + + + + Lemon + Oliver + + + + + + + + + Computer Science - Artificial Intelligence + + + + + Computer Science - Machine Learning + + + Strategic Dialogue Management via Deep Reinforcement Learning + Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. 
Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities. + 2015-11-25 + arXiv.org + + + http://arxiv.org/abs/1511.08099 + + + 2021-01-02 18:29:38 + arXiv: 1511.08099 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/1511.08099.pdf + + + 2021-01-02 18:29:43 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/1511.08099 + + + 2021-01-02 18:29:50 + 1 + text/html + + + conferencePaper + + + + + Applying Neural Networks and Genetic Programming to the Game Lost Cities + + + https://minds.wisconsin.edu/bitstream/handle/1793/79080/LydeenSpr18.pdf?sequence=1&isAllowed=y + + + + + attachment + LydeenSpr18.pdf + + + https://minds.wisconsin.edu/bitstream/handle/1793/79080/LydeenSpr18.pdf + + + 2021-06-12 17:03:24 + 3 + + + report + A summary of a dissertation on Azul + + + https://old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/ + + + + + conferencePaper + + + + Ceramic: A research environment based on the multi-player strategic board game Azul + + + 
https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1 + + + + + computerProgram + Ceramic: A research environment based on the multi-player strategic board game Azul + + + https://github.com/Swynfel/ceramic + + + + + report + Blokus Game Solver + + + https://digitalcommons.calpoly.edu/cpesp/290/ + + + + + conferencePaper + + + ISBN 978-1-4799-2198-0 978-1-4799-2199-7 + 2013 International Conference on Field-Programmable Technology (FPT) + DOI 10.1109/FPT.2013.6718426 + + + + + + + Kyoto, Japan + + + IEEE + + + + + + + Yoza + Takashi + + + + + Moriwaki + Retsu + + + + + Torigai + Yuki + + + + + Kamikubo + Yuki + + + + + Kubota + Takayuki + + + + + Watanabe + Takahiro + + + + + Fujimori + Takumi + + + + + Ito + Hiroyuki + + + + + Seo + Masato + + + + + Akagi + Kouta + + + + + Yamaji + Yuichiro + + + + + Watanabe + Minoru + + + + + + FPGA Blokus Duo Solver using a massively parallel architecture + 12/2013 + DOI.org (Crossref) + + + http://ieeexplore.ieee.org/document/6718426/ + + + 2021-06-28 14:38:57 + 494-497 + + + 2013 International Conference on Field-Programmable Technology (FPT) + + + + + attachment + Full Text + + + https://zero.sci-hub.se/2654/a4d3e713290066b6db7db1d9eedd194e/yoza2013.pdf#view=FitH + + + 2021-06-28 14:39:08 + 1 + application/pdf + + + conferencePaper + + + ISBN 978-1-4799-0565-2 978-1-4799-0562-1 978-1-4799-0563-8 + The 17th CSI International Symposium on Computer Architecture & Digital Systems (CADS 2013) + DOI 10.1109/CADS.2013.6714256 + + + + + + + Tehran, Iran + + + IEEE + + + + + + + Jahanshahi + Ali + + + + + Taram + Mohammad Kazem + + + + + Eskandari + Nariman + + + + + + Blokus Duo game on FPGA + 10/2013 + DOI.org (Crossref) + + + http://ieeexplore.ieee.org/document/6714256/ + + + 2021-06-28 14:39:04 + 149-152 + + + 2013 17th CSI International Symposium on Computer Architecture and Digital Systems (CADS) + + + + + attachment + Full Text + + + 
https://zero.sci-hub.se/3228/9ae6ca1efab5a2ebb63dd4e22a13bf04/jahanshahi2013.pdf#view=FitH + + + 2021-06-28 14:39:07 + 1 + application/pdf + + + journalArticle + + + The World Wide Web Conference + DOI 10.1145/3308558.3314131 + + + + + + + Hsu + Chao-Chun + + + + + Chen + Yu-Hua + + + + + Chen + Zi-Yuan + + + + + Lin + Hsin-Yu + + + + + Huang + Ting-Hao 'Kenneth' + + + + + Ku + Lun-Wei + + + + + + + + + Computer Science - Computation and Language + + + Dixit: Interactive Visual Storytelling via Term Manipulation + In this paper, we introduce Dixit, an interactive visual storytelling system that the user interacts with iteratively to compose a short story for a photo sequence. The user initiates the process by uploading a sequence of photos. Dixit first extracts text terms from each photo which describe the objects (e.g., boy, bike) or actions (e.g., sleep) in the photo, and then allows the user to add new terms or remove existing terms. Dixit then generates a short story based on these terms. Behind the scenes, Dixit uses an LSTM-based model trained on image caption data and FrameNet to distill terms from each image and utilizes a transformer decoder to compose a context-coherent story. Users change images or terms iteratively with Dixit to create the most ideal story. Dixit also allows users to manually edit and rate stories. The proposed procedure opens up possibilities for interpretable and controllable visual storytelling, allowing users to understand the story formation rationale and to intervene in the generation process. 
+ 2019-05-13 + Dixit + arXiv.org + + + http://arxiv.org/abs/1903.02230 + + + 2021-06-28 14:40:29 + arXiv: 1903.02230 + 3531-3535 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/1903.02230.pdf + + + 2021-06-28 14:40:38 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/1903.02230 + + + 2021-06-28 14:40:43 + 1 + text/html + + + computerProgram + Dominion Simulator + + + https://dominionsimulator.wordpress.com/f-a-q/ + + + + + computerProgram + Dominion Simulator Source Code + + + https://github.com/mikemccllstr/dominionstats/ + + + + + blogPost + + + + Best and worst openings in Dominion + + + http://councilroom.com/openings + + + + + blogPost + + + + Optimal Card Ratios in Dominion + + + http://councilroom.com/optimal_card_ratios + + + + + blogPost + + + + Card Winning Stats on Dominion Server + + + http://councilroom.com/supply_win + + + + + forumPost + + + + Dominion Strategy Forum + + + http://forum.dominionstrategy.com/index.php + + + + + journalArticle + + arXiv:1811.11273 [cs] + + + + + + Bendekgey + Henry + + + + + + + + + Computer Science - Artificial Intelligence + + + Clustering Player Strategies from Variable-Length Game Logs in Dominion + We present a method for encoding game logs as numeric features in the card game Dominion. We then run the manifold learning algorithm t-SNE on these encodings to visualize the landscape of player strategies. By quantifying game states as the relative prevalence of cards in a player's deck, we create visualizations that capture qualitative differences in player strategies. Different ways of deviating from the starting game state appear as different rays in the visualization, giving it an intuitive explanation. This is a promising new direction for understanding player strategies across games that vary in length. 
+ 2018-12-12 + arXiv.org + + + http://arxiv.org/abs/1811.11273 + + + 2021-06-28 14:43:21 + arXiv: 1811.11273 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/1811.11273.pdf + + + 2021-06-28 14:43:27 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/1811.11273 + + + 2021-06-28 14:43:31 + 1 + text/html + + + computerProgram + Hanabi Open Agent Dataset + + + https://github.com/aronsar/hoad + + + + + conferencePaper + + + + Hanabi Open Agent Dataset + + + https://dl.acm.org/doi/10.5555/3463952.3464188 + + + + + journalArticle + + arXiv:2010.02923 [cs] + + + + + + Gray + Jonathan + + + + + Lerer + Adam + + + + + Bakhtin + Anton + + + + + Brown + Noam + + + + + + + + + Computer Science - Artificial Intelligence + + + + + Computer Science - Machine Learning + + + + + Computer Science - Computer Science and Game Theory + + + Human-Level Performance in No-Press Diplomacy via Equilibrium Search + Prior AI breakthroughs in complex games have focused on either the purely adversarial or purely cooperative settings. In contrast, Diplomacy is a game of shifting alliances that involves both cooperation and competition. For this reason, Diplomacy has proven to be a formidable research challenge. In this paper we describe an agent for the no-press variant of Diplomacy that combines supervised learning on human data with one-step lookahead search via regret minimization. Regret minimization techniques have been behind previous AI successes in adversarial games, most notably poker, but have not previously been shown to be successful in large-scale games involving cooperation. We show that our agent greatly exceeds the performance of past no-press Diplomacy bots, is unexploitable by expert humans, and ranks in the top 2% of human players when playing anonymous games on a popular Diplomacy website. 
+ 2021-05-03 + arXiv.org + + + http://arxiv.org/abs/2010.02923 + + + 2021-06-28 15:28:02 + arXiv: 2010.02923 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/2010.02923.pdf + + + 2021-06-28 15:28:18 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/2010.02923 + + + 2021-06-28 15:28:22 + 1 + text/html + + + journalArticle + + arXiv:1708.01503 [math] + + + + + + Akiyama + Rika + + + + + Abe + Nozomi + + + + + Fujita + Hajime + + + + + Inaba + Yukie + + + + + Hataoka + Mari + + + + + Ito + Shiori + + + + + Seita + Satomi + + + + + + + + + 55A20 (Primary), 05A99 (Secondary) + + + + + Mathematics - Combinatorics + + + + + Mathematics - Geometric Topology + + + + + Mathematics - History and Overview + + + Maximum genus of the Jenga like configurations + We treat the boundary of the union of blocks in the Jenga game as a surface with a polyhedral structure and consider its genus. We generalize the game and determine the maximum genus of the generalized game. + 2018-08-31 + arXiv.org + + + http://arxiv.org/abs/1708.01503 + + + 2021-06-28 15:28:12 + arXiv: 1708.01503 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/1708.01503.pdf + + + 2021-06-28 15:28:21 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/1708.01503 + + + 2021-06-28 15:28:24 + 1 + text/html + + + journalArticle + + arXiv:1905.08617 [cs] + + + + + + Bai + Chongyang + + + + + Bolonkin + Maksim + + + + + Burgoon + Judee + + + + + Chen + Chao + + + + + Dunbar + Norah + + + + + Singh + Bharat + + + + + Subrahmanian + V. S. 
+ + + + + Wu + Zhe + + + + + + + + + Computer Science - Artificial Intelligence + + + + + Computer Science - Computer Vision and Pattern Recognition + + + Automatic Long-Term Deception Detection in Group Interaction Videos + Most work on automated deception detection (ADD) in video has two restrictions: (i) it focuses on a video of one person, and (ii) it focuses on a single act of deception in a one or two minute video. In this paper, we propose a new ADD framework which captures long term deception in a group setting. We study deception in the well-known Resistance game (like Mafia and Werewolf) which consists of 5-8 players of whom 2-3 are spies. Spies are deceptive throughout the game (typically 30-65 minutes) to keep their identity hidden. We develop an ensemble predictive model to identify spies in Resistance videos. We show that features from low-level and high-level video analysis are insufficient, but when combined with a new class of features that we call LiarRank, produce the best results. We achieve AUCs of over 0.70 in a fully automated setting. 
Our demo can be found at http://home.cs.dartmouth.edu/~mbolonkin/scan/demo/ + 2019-06-15 + arXiv.org + + + http://arxiv.org/abs/1905.08617 + + + 2021-06-28 15:32:49 + arXiv: 1905.08617 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/1905.08617.pdf + + + 2021-06-28 15:32:54 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/1905.08617 + + + 2021-06-28 15:32:58 + 1 + text/html + + + bookSection + + + 10068 + ISBN 978-3-319-50934-1 978-3-319-50935-8 + Computers and Games + + + + + + + Cham + + + Springer International Publishing + + + + + + + Plaat + Aske + + + + + Kosters + Walter + + + + + van den Herik + Jaap + + + + + + + + + Bi + Xiaoheng + + + + + Tanaka + Tetsuro + + + + + + Human-Side Strategies in the Werewolf Game Against the Stealth Werewolf Strategy + 2016 + DOI.org (Crossref) + + + http://link.springer.com/10.1007/978-3-319-50935-8_9 + + + 2021-06-28 15:32:54 + Series Title: Lecture Notes in Computer Science +DOI: 10.1007/978-3-319-50935-8_9 + 93-102 + + + attachment + Full Text + + + https://sci-hub.se/downloads/2019-01-26//f7/bi2016.pdf#view=FitH + + + 2021-06-28 15:33:08 + 1 + application/pdf + + + journalArticle + + arXiv:0804.0071 [math] + + + + + + Yao + Erlin + + + + + + + + 65C20 + + + 91-01 + + + + Mathematics - Probability + + + A Theoretical Study of Mafia Games + Mafia can be described as an experiment in human psychology and mass hysteria, or as a game between informed minority and uninformed majority. Focus on a very restricted setting, Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2] showed that in the mafia game without detectives, if the civilians and mafias both adopt the optimal randomized strategy, then the two groups have comparable probabilities of winning exactly when the total player size is R and the mafia size is of order Sqrt(R). They also proposed a conjecture which stated that this phenomenon should be valid in a more extensive framework. 
In this paper, we first indicate that the main theorem given by Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2] can not guarantee their conclusion, i.e., the two groups have comparable winning probabilities when the mafia size is of order Sqrt(R). Then we give a theorem which validates the correctness of their conclusion. In the last, by proving the conjecture proposed by Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2], we generalize the phenomenon to a more extensive framework, of which the mafia game without detectives is only a special case. + 2008-04-01 + arXiv.org + + + http://arxiv.org/abs/0804.0071 + + + 2021-06-28 15:33:04 + arXiv: 0804.0071 + + + attachment + arXiv Fulltext PDF + + + https://arxiv.org/pdf/0804.0071.pdf + + + 2021-06-28 15:33:07 + 1 + application/pdf + + + attachment + arXiv.org Snapshot + + + https://arxiv.org/abs/0804.0071 + + + 2021-06-28 15:33:10 + 1 + text/html + + + bookSection + + + 11302 + ISBN 978-3-030-04178-6 978-3-030-04179-3 + Neural Information Processing + + + + + + + Cham + + + Springer International Publishing + + + + + + + Cheng + Long + + + + + Leung + Andrew Chi Sing + + + + + Ozawa + Seiichi + + + + + + + + + Zilio + Felipe + + + + + Prates + Marcelo + + + + + Lamb + Luis + + + + + + Neural Networks Models for Analyzing Magic: The Gathering Cards + 2018 + Neural Networks Models for Analyzing Magic + DOI.org (Crossref) + + + http://link.springer.com/10.1007/978-3-030-04179-3_20 + + + 2021-06-28 15:33:26 + Series Title: Lecture Notes in Computer Science +DOI: 10.1007/978-3-030-04179-3_20 + 227-239 + + + attachment + Submitted Version + + + https://arxiv.org/pdf/1810.03744 + + + 2021-06-28 15:33:36 + 1 + application/pdf + + + conferencePaper + + + + The Complexity of Deciding Legality of a Single Step of Magic: The Gathering + + + https://livrepository.liverpool.ac.uk/3029568/ + + + + + conferencePaper + + + + Magic: The Gathering in Common Lisp + + + https://vixra.org/abs/2001.0065 
- [Magic: The Gathering in Common Lisp](https://github.com/jeffythedragonslayer/maglisp) (computerProgram)
- [Mathematical programming and Magic: The Gathering](https://commons.lib.niu.edu/handle/10843/19194) (thesis)
- [Deck Construction Strategies for Magic: The Gathering](https://www.doi.org/10.1685/CSC06077) (conferencePaper)
- [Deckbuilding in Magic: The Gathering Using a Genetic Algorithm](https://doi.org/11250/2462429) (thesis)
- [Magic: The Gathering Deck Performance Prediction](http://cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf) (report)

# Modern Art: The card game

- [A constraint programming based solver for Modern Art](https://github.com/captn3m0/modernart) (computerProgram)

# Monopoly

- [Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach](http://arxiv.org/abs/2103.00683) (journalArticle) — Marina Haliem, Trevor Bonjour, Aala Alsalem, Shilpa Thomas, Hongyu Li, Vaneet Aggarwal, Bharat Bhargava, Mayank Kejriwal. arXiv:2103.00683 [cs], 2021-02-28.
  > Learning how to adapt and make real-time informed decisions in dynamic and complex environments is a challenging problem. To learn this task, Reinforcement Learning (RL) relies on an agent interacting with an environment and learning through trial and error to maximize the cumulative sum of rewards received by it. In multi-player Monopoly, players have to make several decisions every turn, involving complex actions such as making trades. This makes decision-making harder and thus introduces a highly complicated task for an RL agent to play and learn its winning strategies. In this paper, we introduce a Hybrid Model-Free Deep RL (DRL) approach that is capable of playing and learning winning strategies for the popular board game Monopoly. To achieve this, our DRL agent (1) starts its learning process by imitating a rule-based agent (that resembles human logic) to initialize its policy, and (2) learns the successful actions and improves its policy using DRL. Experimental results demonstrate intelligent behavior of our proposed agent, as it shows high win rates against different types of agent-players.
- [Negotiation strategy of agents in the MONOPOLY game](http://ieeexplore.ieee.org/document/1013210/) (conferencePaper) — Y. Yasumura, K. Oguchi, K. Nitta. Proceedings 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Banff, Alta., Canada, IEEE, 2001, pp. 277–281. DOI: 10.1109/CIRA.2001.1013210.
- [Generating interesting Monopoly boards from open data](http://ieeexplore.ieee.org/document/6374168/) (conferencePaper) — Marie Gustafsson Friberger, Julian Togelius. 2012 IEEE Conference on Computational Intelligence and Games (CIG), Granada, Spain, IEEE, 09/2012, pp. 288–295. DOI: 10.1109/CIG.2012.6374168.
- [Estimating the probability that the game of Monopoly never ends](http://ieeexplore.ieee.org/document/5429349/) (conferencePaper) — Eric J. Friedman, Shane G. Henderson, Thomas Byuen, German Gutierrez Gallardo. Proceedings of the 2009 Winter Simulation Conference (WSC), Austin, TX, USA, IEEE, 12/2009, pp. 380–391. DOI: 10.1109/WSC.2009.5429349.
- [Learning to Play Monopoly with Monte Carlo Tree Search](https://project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf) (report)
- [Monopoly Using Reinforcement Learning](https://ieeexplore.ieee.org/document/8929523/) (conferencePaper) — Edupuganti Arun, Harikrishna Rajesh, Debarka Chakrabarti, Harikiran Cherala, Koshy George. TENCON 2019 – 2019 IEEE Region 10 Conference (TENCON), Kochi, India, IEEE, 10/2019, pp. 858–862. DOI: 10.1109/TENCON.2019.8929523.
- [A Markovian Exploration of Monopoly](https://pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf) (report)
- [Learning to play Monopoly: A Reinforcement Learning approach](https://intelligence.csd.auth.gr/publication/conference-papers/learning-to-play-monopoly-a-reinforcement-learning-approach/) (conferencePaper)
- [What's the Best Monopoly Strategy?](https://core.ac.uk/download/pdf/48614184.pdf) (presentation)

# Pandemic

- [NP-Completeness of Pandemic](https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article) (journalArticle) — Kenichiro Nakai, Yasuhiko Takenaga. Journal of Information Processing 20(3), 2012, pp. 723–726. DOI: 10.2197/ipsjjip.20.723.

# Pentago

- [On Solving Pentago](http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf) (thesis)

# Resistance: Avalon

- [Finding Friend and Foe in Multi-Agent Games](http://arxiv.org/abs/1906.02330) (journalArticle) — Jack Serrino, Max Kleiman-Weiner, David C. Parkes, Joshua B. Tenenbaum. arXiv:1906.02330 [cs, stat], 2019-06-05.
  > AI for multi-agent games like Go, Poker, and Dota has seen great strides in recent years. Yet none of these games address the real-life challenge of cooperation in the presence of unknown and uncertain teammates. This challenge is a key game mechanism in hidden role games. Here we develop the DeepRole algorithm, a multi-agent reinforcement learning agent that we test on The Resistance: Avalon, the most popular hidden role game. DeepRole combines counterfactual regret minimization (CFR) with deep value networks trained through self-play. Our algorithm integrates deductive reasoning into vector-form CFR to reason about joint beliefs and deduce partially observable actions. We augment deep value networks with constraints that yield interpretable representations of win probabilities. These innovations enable DeepRole to scale to the full Avalon game. Empirical game-theoretic methods show that DeepRole outperforms other hand-crafted and learned agents in five-player Avalon. DeepRole played with and against human players on the web in hybrid human-agent teams. We find that DeepRole outperforms human players as both a cooperator and a competitor.

# Santorini

- [A Mathematical Analysis of the Game of Santorini](https://openworks.wooster.edu/independentstudy/8917/) (thesis)
- [A Mathematical Analysis of the Game of Santorini](https://github.com/carsongeissler/SantoriniIS) (computerProgram)

# Scotland Yard

- [The complexity of Scotland Yard](https://eprints.illc.uva.nl/id/eprint/193/1/PP-2006-18.text.pdf) (report)

# 2048

- [Temporal difference learning of N-tuple networks for the game 2048](http://ieeexplore.ieee.org/document/6932907/) (conferencePaper) — Marcin Szubert, Wojciech Jaskowski. 2014 IEEE Conference on Computational Intelligence and Games (CIG), Dortmund, Germany, IEEE, 8/2014, pp. 1–8. DOI: 10.1109/CIG.2014.6932907.
- [On the Complexity of Slide-and-Merge Games](http://arxiv.org/abs/1501.03837) (journalArticle) — Ahmed Abdelkader, Aditya Acharya, Philip Dasler. arXiv:1501.03837 [cs], 2015-01-15.
  > We study the complexity of a particular class of board games, which we call 'slide and merge' games. Namely, we consider 2048 and Threes, which are among the most popular games of their type. In both games, the player is required to slide all rows or columns of the board in one direction to create a high-value tile by merging pairs of equal tiles into one with the sum of their values. This combines features from both block-pushing and tile-matching puzzles, like Push and Bejeweled, respectively. We define a number of natural decision problems on a suitable generalization of these games and prove NP-hardness for 2048 by reducing from 3SAT. Finally, we discuss the adaptation of our reduction to Threes and conjecture a similar result.
- [2048 Without New Tiles Is Still Hard](http://drops.dagstuhl.de/opus/volltexte/2016/5885/) (journalArticle) — Ahmed Abdelkader, Aditya Acharya, Philip Dasler (ed. Marc Herbstritt). Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik, 2016, 14 pages. DOI: 10.4230/LIPICS.FUN.2016.1.

# Game Design

- [MDA: A Formal Approach to Game Design and Game Research](https://aaai.org/Library/Workshops/2004/ws04-04-001.php) (conferencePaper)
- [Exploring anonymity in cooperative board games](http://www.digra.org/digital-library/publications/exploring-anonymity-in-cooperative-board-games/) (conferencePaper) — Think Design Play: 2011 DiGRA International Conference, DiGRA/Utrecht School of the Arts, January 2011.
  > This study was done as part of a larger research project exploring if and how gameplay design could give informative principles to the design of educational activities. The researchers conducted a series of studies trying to map game mechanics that had the special quality of being inclusive, i.e., playable by a diverse group of players. This specific study focused on designing a cooperative board game with the goal of implementing anonymity as a game mechanic. Inspired by the gameplay design patterns methodology (Björk & Holopainen 2005a; 2005b; Holopainen & Björk 2008), mechanics from existing cooperative board games were extracted and analyzed in order to inform the design process. The results from prototyping and play testing indicated that it is possible to implement anonymous actions in cooperative board games and that this mechanic made rather unique forms of gameplay possible. These design patterns can be further developed in order to address inclusive educational practices.

# Hanabi

- [Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi](http://arxiv.org/abs/2107.07630) (journalArticle) — Ho Chit Siu, Jaime D. Pena, Kimberlee C. Chang, Edenna Chen, Yutai Zhou, Victor J. Lopez, Kyle Palko, Ross E. Allen. arXiv:2107.07630 [cs], 2021-07-19.
  > Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference of AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.

# Yahtzee

- [Probabilities in Yahtzee](https://pubs.nctm.org/view/journals/mt/75/9/article-p751.xml) (journalArticle) — Bonnie H. Litwiller, David R. Duncan. The Mathematics Teacher 75(9), 12/1982, pp. 751–754. DOI: 10.5951/MT.75.9.0751.
  > Teachers of units in probability are often interested in providing examples of probabilistic situations in a nonclassroom setting. Games are a rich source of such probabilities. Many people enjoy playing a commercial game called Yahtzee. A Yahtzee player receives points for achieving various specified numerical combinations of five dice during the three rolls that constitute a turn.
- [Optimal Solitaire Yahtzee Strategies](http://www.yahtzee.org.uk/optimal_yahtzee_TV.pdf) (presentation) — Tom Verhoeff.
- [Yahtzee: a Large Stochastic Environment for RL Benchmarks](http://researchers.lille.inria.fr/~lazaric/Webpage/PublicationsByTopic_files/bonarini2005yahtzee.pdf) (journalArticle) — Andrea Bonarini, Alessandro Lazaric, Marcello Restelli.
  > Yahtzee is a game that is regularly played by more than 100 million people in the world. We propose a simplified version of Yahtzee as a benchmark for RL algorithms. We have already used it for this purpose, and an implementation is available.
- [Optimal Yahtzee performance in multi-player games](https://www.csc.kth.se/utbildning/kth/kurser/DD143X/dkand13/Group4Per/report/12-serra-widell-nigata.pdf) (thesis) — Andreas Serra, Kai Widell Niigata. KTH, School of Computer Science and Communication (CSC), bachelor's thesis, April 12, 2013, 17 pages.
  > Yahtzee is a game with a moderately large search space, dependent on the factor of luck. This makes it not quite trivial to implement an optimal strategy for it. Using the optimal strategy for single-player use, comparisons against other algorithms are made and the results are analyzed for hints on what it could take to make an algorithm that could beat the single-player optimal strategy.
- [How to Maximize Your Score in Solitaire Yahtzee](http://www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf) (manuscript, incomplete draft) — Tom Verhoeff. 18 pages.
  > Yahtzee is a well-known game played with five dice. Players take turns at assembling and scoring dice patterns. The player with the highest score wins. Solitaire Yahtzee is a single-player version of Yahtzee aimed at maximizing one's score. A strategy for playing Yahtzee determines which choice to make in each situation of the game. We show that the maximum expected score over all Solitaire Yahtzee strategies is 254.5896…
- [Using Deep Q-Learning to Compare Strategy Ladders of Yahtzee](https://raw.githubusercontent.com/philvasseur/Yahtzee-DQN-Thesis/dcf2bfe15c3b8c0ff3256f02dd3c0aabdbcbc9bb/webpage/final_report.pdf) (thesis) — Philip Vasseur. Yale University, Department of Computer Science, December 12, 2019, 12 pages.
  > "Bots" playing games is not a new concept, likely going back to the first video games. However, there has been a new wave recently using machine learning to learn to play games at a near-optimal level, essentially using neural networks to "solve" games. Depending on the game, this can be relatively straightforward using supervised learning. However, this requires having data for optimal play, which is often not possible due to the sheer complexity of many games. For example, solitaire Yahtzee has this data available, but two-player Yahtzee does not, due to the massive state space. A recent trend in response to this started with Google DeepMind in 2013, who used Deep Reinforcement Learning to play various Atari games [4]. This project will apply Deep Reinforcement Learning (specifically Deep Q-Learning) and measure how an agent learns to play Yahtzee in the form of a strategy ladder. A strategy ladder is a way of looking at how the performance of an AI varies with the computational resources it uses. Different sets of rules change how the AI learns, which varies the strategy ladder itself. This project will vary the upper bonus threshold and then attempt to measure how "good" the various strategy ladders are, in essence attempting to find the set of rules which creates the "best" version of Yahtzee. We assume/expect that there is some correlation between strategy ladders for AI and strategy ladders for humans, meaning that a game with a "good" strategy ladder for an AI indicates that the game is interesting and challenging for humans.
- [Defensive Yahtzee](http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168668) (report) — KTH Royal Institute of Technology, Computer Science and Communication, 22 pages.
  > In this project an algorithm has been created that plays Yahtzee using rule-based heuristics. The focus is getting a high lowest score and a high 10th percentile. All rules of Yahtzee and the probabilities for each combination have been studied, and based on this each turn is optimized to get a guaranteed decent high score. The algorithm got a lowest score of 79 and a 10th percentile of 152 when executed 100 000 times.
- [An Optimal Strategy for Yahtzee](http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf) (report) — James Glenn.

# Ticket to Ride

- [Applications of Graph Theory and Probability in the Board Game Ticket to Ride](https://www.rtealwitter.com/slides/2020-JMM.pdf) (presentation) — R. Teal Witter, Alex Lyford. Middlebury College, January 16, 2020.

# Settlers of Catan

- [Strategic Dialogue Management via Deep Reinforcement Learning](http://arxiv.org/abs/1511.08099) (journalArticle) — Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon. arXiv:1511.08099 [cs], 2015-11-25.
  > Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan, where players can offer resources in exchange for others and can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players ('bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27% versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems and strategic agents with negotiation abilities.

# RISK

- [Monte Carlo Tree Search for Risk](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf) (conferencePaper) — Christoffer Limér, Erik Kalmér, Mika Cohen. 14th NATO Operations Research and Analysis (OR&A) Conference: Emerging and Disruptive Technology, NATO, 2/16/2021, AC/323(SAS-ACT)TP/1017. DOI: 10.14339/STO-MP-SAS-OCS-ORA-2020-WCM-01-PDF.
- [Wargaming with Monte-Carlo Tree Search](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf) (presentation) — Christoffer Limér, Erik Kalmér. 2/16/2021.

# Frameworks

- [RLCard: A Toolkit for Reinforcement Learning in Card Games](http://arxiv.org/abs/1910.04376) (journalArticle) — Daochen Zha, Kwei-Herng Lai, Yuanpu Cao, Songyi Huang, Ruzhe Wei, Junyu Guo, Xia Hu. arXiv:1910.04376 [cs], 2020-02-14.
  > RLCard is an open-source toolkit for reinforcement learning research in card games. It supports various card environments with easy-to-use interfaces, including Blackjack, Leduc Hold'em, Texas Hold'em, UNO, Dou Dizhu and Mahjong. The goal of RLCard is to bridge reinforcement learning and imperfect information games, and push forward the research of reinforcement learning in domains with multiple agents, large state and action spaces, and sparse reward. In this paper, we provide an overview of the key components in RLCard, a discussion of the design principles, a brief introduction of the interfaces, and comprehensive evaluations of the environments. The code and documents are available at https://github.com/datamllab/rlcard
- [Design and Implementation of TAG: A Tabletop Games Framework](http://arxiv.org/abs/2009.12065) (journalArticle) — Raluca D. Gaina, Martin Balla, Alexander Dockhorn, Raul Montoliu, Diego Perez-Liebana. arXiv:2009.12065 [cs], 2020-09-25.
  > This document describes the design and implementation of the Tabletop Games framework (TAG), a Java-based benchmark for developing modern board games for AI research. TAG provides a common skeleton for implementing tabletop games based on a common API for AI agents, a set of components and classes to easily add new games, and an import module for defining data in JSON format. At present, this platform includes the implementation of seven different tabletop games that can also be used as an example for further developments. Additionally, TAG also incorporates logging functionality that allows the user to perform a detailed analysis of the game, in terms of action space, branching factor, hidden information, and other measures of interest for Game AI research. The objective of this document is to serve as a central point where the framework can be described at length. TAG can be downloaded at: https://github.com/GAIGResearch/TabletopGames
- [Game Tree Search Algorithms – C++ library for AI bot programming](https://github.com/AdamStelmaszczyk/gtsa) (computerProgram) — Adam Stelmaszczyk, 2015. C++.
- [TAG: Tabletop Games Framework](https://github.com/GAIGResearch/TabletopGames) (computerProgram) — Raluca D. Gaina, Martin Balla, Alexander Dockhorn, Raul Montoliu, Diego Perez-Liebana. Java, MIT License. Its catalog description repeats the paper abstract above.

The remainder of this diff hunk updates the RDF's collection index; the named collections are: 2048, Accessibility, Azul, Blokus, Carcassonne, Dixit, Dominion, Frameworks, Game Design, Hanabi, Hive, Jenga, Kingdomino, Lost Cities, Mafia, Magic: The Gathering, Mobile Games, Modern Art: The card game, Monopoly, Monopoly Deal, Pandemic, Patchwork, Pentago, Quixo, Race for the Galaxy, Resistance: Avalon, RISK, Santorini, Scotland Yard, Secret Hitler, Settlers of Catan, Shobu.

diff --git a/to-markdown.xsl b/to-markdown.xsl
new file mode 100644
index 0000000..ff60f38

[New 41-line XSLT stylesheet; its element markup did not survive extraction. The surviving template literals show that it emits a `# <collection>` heading for each collection and a `- [<title>](<url>) (<itemType>)` list item for each entry.]
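As the surviving literals in to-markdown.xsl indicate, the README is generated by walking the Zotero RDF export and emitting a `# <collection>` heading per collection and a `- [<title>](<url>) (<itemType>)` bullet per item. A rough Python sketch of that mapping is below; note it uses a simplified stand-in schema, not Zotero's actual RDF vocabulary, which the real stylesheet matches directly.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the Zotero RDF export (hypothetical element
# names; the real file uses Zotero's RDF namespaces).
MOCK_RDF = """
<library>
  <collection name="Yahtzee">
    <item type="report"
          url="http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf">
      <title>An Optimal Strategy for Yahtzee</title>
    </item>
  </collection>
</library>
"""

def to_markdown(rdf_text: str) -> str:
    """Mirror the stylesheet's output format: '# <collection>' headings
    and '- [<title>](<url>) (<type>)' list items."""
    root = ET.fromstring(rdf_text)
    lines = []
    for coll in root.iter("collection"):
        lines.append(f"# {coll.get('name')}")
        for item in coll.iter("item"):
            title = item.findtext("title")
            lines.append(f"- [{title}]({item.get('url')}) ({item.get('type')})")
    return "\n".join(lines)

print(to_markdown(MOCK_RDF))
```

The actual pipeline (see the Makefile) runs `xsltproc to-markdown.xsl boardgame-research.rdf`, concatenates HEADER.md with the result, and lets doctoc build the table of contents.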