From d90ac00f75d427446f106b4cac63d8678dcec788 Mon Sep 17 00:00:00 2001 From: Nemo Date: Mon, 28 Jun 2021 20:50:44 +0530 Subject: [PATCH] Initial work on generated markdown file --- HACKING.md | 5 + HEADER.md | 19 +++ Makefile | 4 + README.md | 387 +++++++++++++++++++----------------------- links.txt | 170 +++++++++++++++++++++ to-markdown.xsl | 41 +++++ 6 files changed, 393 insertions(+), 233 deletions(-) create mode 100644 HACKING.md create mode 100644 HEADER.md create mode 100644 Makefile create mode 100644 links.txt create mode 100644 to-markdown.xsl diff --git a/HACKING.md b/HACKING.md new file mode 100644 index 0000000..5290628 --- /dev/null +++ b/HACKING.md @@ -0,0 +1,5 @@ +# HACKING + +The primary source for everything is my Zotero instance. It exports an RDF file, which is then converted to markdown using XSLT: + +`xsltproc to-markdown.xsl boardgame-research.rdf` \ No newline at end of file diff --git a/HEADER.md b/HEADER.md new file mode 100644 index 0000000..9fa0bbe --- /dev/null +++ b/HEADER.md @@ -0,0 +1,19 @@ +# boardgame-research [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) + +This is a list of boardgame research. The entries are primarily about "solving/playing/learning" games (by a variety of approaches), or +occasionally about game design or meta-aspects of a game. This doesn't cover all aspects of each game (notably missing social-science work), but +should be useful to anyone interested in boardgames and their optimal play. While there is a ton of easily accessible research on games like +Chess and Go, finding prior work on more contemporary games can be a bit hard. This list focuses on the latter. If you are interested in well-researched +games like Chess, Go, or Hex, take a look at the [Chess programming wiki](https://www.chessprogramming.org/Games) instead. The list also covers some computer games that fall under similar themes. + +Exported versions are available in the following formats: + +- [Zotero RDF](boardgame-research.rdf) +- [BibTeX](boardgame-research.bib) + +Watch the repository to get the latest updates for now. + +If you aren't able to access any paper on this list, please [try using Sci-Hub](https://en.wikipedia.org/wiki/Sci-Hub) or [reach out to me](https://captnemo.in/contact/).
+ + + \ No newline at end of file diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..e4eaf86 --- /dev/null +++ b/Makefile @@ -0,0 +1,4 @@ +all: + xsltproc to-markdown.xsl boardgame-research.rdf > /tmp/contents.md + cat HEADER.md /tmp/contents.md > README.md + doctoc README.md \ No newline at end of file diff --git a/README.md b/README.md index 3567c06..b986a05 100644 --- a/README.md +++ b/README.md @@ -17,34 +17,26 @@ If you aren't able to access any paper on this list, please [try using Sci-Hub]( -**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* -- [Azul](#azul) -- [Blokus](#blokus) +- [Accessibility](#accessibility) - [Carcassonne](#carcassonne) - [Diplomacy](#diplomacy) - [Dixit](#dixit) -- [Dominion](#dominion) - [Hanabi](#hanabi) - [Hive](#hive) - [Jenga](#jenga) - [Kingdomino](#kingdomino) -- [Lost Cities](#lost-cities) - [Mafia](#mafia) -- [Magic: the Gathering](#magic-the-gathering) -- [Modern Art: The card game](#modern-art-the-card-game) +- [Magic: The Gathering](#magic-the-gathering) +- [Mobile Games](#mobile-games) +- [2048](#2048) - [Monopoly](#monopoly) - [Monopoly Deal](#monopoly-deal) - [Nmbr9](#nmbr9) -- [Pandemic](#pandemic) - [Patchwork](#patchwork) -- [Pentago](#pentago) - [Quixo](#quixo) - [Race for the Galaxy](#race-for-the-galaxy) -- [The Resistance: Avalon](#the-resistance-avalon) -- [Risk](#risk) -- [Santorini](#santorini) -- [Scotland Yard](#scotland-yard) +- [RISK](#risk) - [Secret Hitler](#secret-hitler) - [Set](#set) - [Settlers of Catan](#settlers-of-catan) @@ -53,250 +45,179 @@ If you aren't able to access any paper on this list, please [try using Sci-Hub]( - [Tetris Link](#tetris-link) - [Ticket to Ride](#ticket-to-ride) - [Ultimate Tic-Tac-Toe](#ultimate-tic-tac-toe) -- [Uno](#uno) +- [UNO](#uno) - [Yahtzee](#yahtzee) -- [Mobile Games](#mobile-games) - - [2048](#2048) -- [Game Design](#game-design) - - [Accessibility](#accessibility) -- [Frameworks/Toolkits](#frameworkstoolkits) -# Azul -- [A summary of a dissertation on Azul](https://old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/) (unpublished) -- [Ceramic: A research environment based on the multi-player strategic board game Azul](https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1) [[GitHub](https://github.com/Swynfel/ceramic)] - -# Blokus -- [Blokus Game Solver](https://digitalcommons.calpoly.edu/cpesp/290/) -- [FPGA Blokus Duo Solver using a massively parallel architecture](https://doi.org/10.1109/FPT.2013.6718426) -- [Blokus Duo game on FPGA](https://doi.org/10.1109/CADS.2013.6714256) +# Accessibility +- [Meeple Centred Design: A Heuristic Toolkit for Evaluating the Accessibility of Tabletop Games](http://link.springer.com/10.1007/s40869-018-0057-8) (journalArticle) +- [Eighteen Months of Meeple Like Us: An Exploration into the State of Board Game Accessibility](http://link.springer.com/10.1007/s40869-018-0056-9) (journalArticle) # Carcassonne -- [Playing Carcassonne with Monte Carlo Tree Search](https://arxiv.org/abs/2009.12974) +- [Playing Carcassonne with Monte Carlo Tree Search](http://arxiv.org/abs/2009.12974) (journalArticle) # Diplomacy -- [Human-Level Performance in No-Press Diplomacy via Equilibrium Search](https://arxiv.org/abs/2010.02923) -- [Learning to Play No-Press Diplomacy with Best Response Policy Iteration ](https://arxiv.org/abs/2006.04635) -- [No Press Diplomacy: Modeling Multi-Agent Gameplay 
](https://arxiv.org/abs/1909.02128) -- [Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game ](https://arxiv.org/abs/1902.06996) +- [Learning to Play No-Press Diplomacy with Best Response Policy Iteration](http://arxiv.org/abs/2006.04635v2) (journalArticle) +- [No Press Diplomacy: Modeling Multi-Agent Gameplay](http://arxiv.org/abs/1909.02128v2) (journalArticle) +- [Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game](http://arxiv.org/abs/1902.06996v1) (journalArticle) +- [Monte Carlo Tree Search for the Game of Diplomacy](https://dl.acm.org/doi/10.1145/3411408.3411413) (conferencePaper) # Dixit -- [Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game](https://arxiv.org/abs/2010.00048) -- [Dixit: Interactive Visual Storytelling via Term Manipulation](https://arxiv.org/abs/1903.02230) - -# Dominion - -There is a [simulator](https://dominionsimulator.wordpress.com/f-a-q/) and the code behind -[the Dominion server running councilroom.com](https://github.com/mikemccllstr/dominionstats/) is available. councilroom has the [best and worst openings](http://councilroom.com/openings), [optimal card ratios](http://councilroom.com/optimal_card_ratios), [Card winning stats](http://councilroom.com/supply_win) and lots of other empirical research. The [Dominion Strategy Forum](http://forum.dominionstrategy.com/index.php) is another good general resource. - -- [Clustering Player Strategies from Variable-Length Game Logs in Dominion](https://arxiv.org/abs/1811.11273) +- [Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game](http://arxiv.org/abs/2010.00048) (journalArticle) # Hanabi -- [Improving Policies via Search in Cooperative Partially Observable Games](https://arxiv.org/abs/1912.02318) (FB) [[code](https://github.com/facebookresearch/Hanabi_SPARTA)] - Current best result. 
-- [Re-determinizing MCTS in Hanabi](https://ieee-cog.org/2020/papers2019/paper_17.pdf) -- [Hanabi is NP-hard, Even for Cheaters who Look at Their Cards](https://arxiv.org/abs/1603.01911) -- [Evolving Agents for the Hanabi 2018 CIG Competition](https://ieeexplore.ieee.org/abstract/document/8490449) -- [Aspects of the Cooperative Card Game Hanabi](https://link.springer.com/chapter/10.1007/978-3-319-67468-1_7) -- [How to Make the Perfect Fireworks Display: Two Strategies for Hanabi](https://doi.org/10.4169/math.mag.88.5.323) -- [Playing Hanabi Near-Optimally](https://link.springer.com/chapter/10.1007/978-3-319-71649-7_5) -- [Evaluating and modelling Hanabi-playing agents](https://doi.org/10.1109/CEC.2017.7969465) -- [An intentional AI for hanabi](https://ieeexplore.ieee.org/abstract/document/8080417) -- [The Hanabi challenge: A new frontier for AI research](https://doi.org/10.1016/j.artint.2019.103216) [[arXiv](https://arxiv.org/abs/1902.00506)]] (DeepMind) -- [Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information](https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167/10193) -- [A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs](http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf) -- [I see what you see: Integrating eye tracking into Hanabi playing agents](http://www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf) -- [The 2018 Hanabi competition](https://doi.org/10.1109/CIG.2019.8848008) -- [Diverse Agents for Ad-Hoc Cooperation in Hanabi](https://doi.org/10.1109/CIG.2019.8847944) [[arXiv](https://arxiv.org/pdf/2004.13710v2.pdf)] -- [State of the art Hanabi bots + simulation framework in rust](https://github.com/WuTheFWasThat/hanabi.rs) -- [A strategy simulator for the well-known cooperative card game Hanabi](https://github.com/rjtobin/HanSim) -- [A framework for writing bots that play Hanabi](https://github.com/Quuxplusone/Hanabi) -- [Evaluating the Rainbow DQN Agent in Hanabi with Unseen Partners](https://arxiv.org/abs/2004.13291) -- [Operationalizing Intentionality to Play Hanabi with Human Players](https://doi.org/10.1109/TG.2020.3009359) -- [Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents](https://ojs.aaai.org//index.php/AIIDE/article/view/7404) [[pdf](https://ojs.aaai.org/index.php/AIIDE/article/view/7404/7333)] -- [Playing mini-Hanabi card game with Q-learning](http://id.nii.ac.jp/1001/00205046/) -- [Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi](https://arxiv.org/abs/2004.13710) -- [Hanabi Open Agend Dataset](https://github.com/aronsar/hoad) - [[ACM](https://dl.acm.org/doi/abs/10.5555/3461017.3461244)] +- [How to Make the Perfect Fireworks Display: Two Strategies forHanabi](https://doi.org/10.4169%2Fmath.mag.88.5.323) (journalArticle) +- [Evaluating and modelling Hanabi-playing agents](https://doi.org/10.1109%2Fcec.2017.7969465) (conferencePaper) +- [The Hanabi challenge: A new frontier for AI research](https://doi.org/10.1016%2Fj.artint.2019.103216) (journalArticle) +- [The 2018 Hanabi competition](https://doi.org/10.1109%2Fcig.2019.8848008) (conferencePaper) +- [Diverse Agents for Ad-Hoc Cooperation in Hanabi](https://doi.org/10.1109%2Fcig.2019.8847944) (conferencePaper) +- [Improving Policies via Search in Cooperative Partially Observable Games](http://arxiv.org/abs/1912.02318v1) (journalArticle) +- [Hanabi is NP-hard, Even for Cheaters who Look at Their Cards](http://arxiv.org/abs/1603.01911v3) (journalArticle) +- [Generating and Adapting 
to Diverse Ad-Hoc Cooperation Agents in Hanabi](http://arxiv.org/abs/2004.13710v2) (journalArticle) +- [Evaluating the Rainbow DQN Agent in Hanabi with Unseen Partners](http://arxiv.org/abs/2004.13291v1) (journalArticle) +- [Re-determinizing MCTS in Hanabi]() (conferencePaper) +- [Evolving Agents for the Hanabi 2018 CIG Competition](https://ieeexplore.ieee.org/document/8490449/) (conferencePaper) +- [Aspects of the Cooperative Card Game Hanabi](http://link.springer.com/10.1007/978-3-319-67468-1_7) (bookSection) +- [Playing Hanabi Near-Optimally](http://link.springer.com/10.1007/978-3-319-71649-7_5) (bookSection) +- [An intentional AI for hanabi](http://ieeexplore.ieee.org/document/8080417/) (conferencePaper) +- [Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information](https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167) (conferencePaper) +- [A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs](http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf) (journalArticle) +- [I see what you see: Integrating eye tracking into Hanabi playing agents]() (journalArticle) +- [State of the art Hanabi bots + simulation framework in rust](https://github.com/WuTheFWasThat/hanabi.rs) (computerProgram) +- [A strategy simulator for the well-known cooperative card game Hanabi](https://github.com/rjtobin/HanSim) (computerProgram) +- [A framework for writing bots that play Hanabi](https://github.com/Quuxplusone/Hanabi) (computerProgram) +- [Operationalizing Intentionality to Play Hanabi with Human Players](https://ieeexplore.ieee.org/document/9140404/) (journalArticle) +- [Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents](https://ojs.aaai.org/index.php/AIIDE/article/view/7404) (journalArticle) +- [Playing mini-Hanabi card game with Q-learning](http://id.nii.ac.jp/1001/00205046/) (conferencePaper) # Hive -- [On the complexity of Hive](https://dspace.library.uu.nl/handle/1874/396955) +- [On the complexity of Hive](https://dspace.library.uu.nl/handle/1874/396955) (thesis) # Jenga -- [Maximum genus of the Jenga like configurations](https://arxiv.org/abs/1708.01503) -- [Jidoukan Jenga: Teaching English through remixing games and game rules](https://www.llpjournal.org/2020/04/13/jidokan-jenga.html) +- [Jidoukan Jenga: Teaching English through remixing games and game rules](https://www.llpjournal.org/2020/04/13/jidokan-jenga.html) (journalArticle) # Kingdomino -- [Monte Carlo Methods for the Game Kingdomino](https://doi.org/10.1109/CIG.2018.8490419) [[arXiv](https://arxiv.org/abs/1807.04458)] -- [NP-completeness of the game Kingdomino](https://arxiv.org/abs/1909.02849) - -# Lost Cities -- [Applying Neural Networks and Genetic Programming to the Game Lost Cities](http://digital.library.wisc.edu/1793/79080) +- [Monte Carlo Methods for the Game Kingdomino](https://doi.org/10.1109%2Fcig.2018.8490419) (conferencePaper) +- [Monte Carlo Methods for the Game Kingdomino](http://arxiv.org/abs/1807.04458v2) (journalArticle) +- [NP-completeness of the game Kingdomino](http://arxiv.org/abs/1909.02849v3) (journalArticle) # Mafia -- [A mathematical model of the Mafia game](https://arxiv.org/abs/1009.1031) -- [Automatic Long-Term Deception Detection in Group Interaction Videos](https://arxiv.org/abs/1905.08617) -- [Human-Side Strategies in the Werewolf Game Against the Stealth Werewolf Strategy](https://link.springer.com/chapter/10.1007/978-3-319-50935-8_9) -- [A Theoretical Study of Mafia Games](https://arxiv.org/abs/0804.0071) +- [A 
mathematical model of the Mafia game](http://arxiv.org/abs/1009.1031v3) (journalArticle) -# Magic: the Gathering -- [Magic: the Gathering is as Hard as Arithmetic](https://arxiv.org/abs/2003.05119) -- [Magic: The Gathering is Turing Complete](https://arxiv.org/abs/1904.09828) -- [Neural Networks Models for Analyzing Magic: the Gathering Cards](https://link.springer.com/chapter/10.1007/978-3-030-04179-3_20) [[arXiv](https://arxiv.org/abs/1810.03744)] -- [The Complexity of Deciding Legality of a Single Step of Magic: the Gathering](https://livrepository.liverpool.ac.uk/3029568/1/magic.pdf) -- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering](https://doi.org/10.1109/TCIAIG.2012.2204883) -- [Magic: The Gathering in Common Lisp](https://pdfs.semanticscholar.org/5fc8/58802f19504ea950e20e31526dc2269b43d8.pdf) [[source](https://github.com/jeffythedragonslayer/maglisp)] -- [Deck Costruction Strategies for Magic: the Gathering](https://cab.unime.it/journals/index.php/congress/article/viewFile/141/141) -- [Deckbuilding in Magic: The Gathering Using a Genetic Algorithm](http://hdl.handle.net/11250/2462429) -- [Mathematical programming and Magic: The Gathering®](https://commons.lib.niu.edu/handle/10843/19194) -- [Optimal Card-Collecting Strategies for Magic: The Gathering](https://doi.org/10.1080/07468342.2000.11974103) -- [Monte Carlo search applied to card selection in Magic: The Gathering](https://doi.org/10.1109/CIG.2009.5286501) -- [Magic: The Gathering Deck Performance Prediction](http://cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf) - -# Modern Art: The card game -- [A constraint programming based solver for Modern Art](https://github.com/captn3m0/modernart) - -# Monopoly -- [Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach](https://arxiv.org/abs/2103.00683) -- [Negotiation strategy of agents in the MONOPOLY game](https://ieeexplore.ieee.org/abstract/document/1013210) -- [Generating interesting Monopoly boards from open data](https://ieeexplore.ieee.org/abstract/document/6374168) -- [Estimating the probability that the game of Monopoly never ends](https://ieeexplore.ieee.org/abstract/document/5429349) -- [Learning to play Monopoly:A Reinforcement Learning approach](https://www.researchgate.net/profile/Anestis_Fachantidis/publication/289403522_Learning_to_play_monopoly_A_Reinforcement_learning_approach/links/59dd1f3e458515f6efef1904/Learning-to-play-monopoly-A-Reinforcement-learning-approach.pdf) -- [Monopoly as a Markov Process](https://doi.org/10.1080/0025570X.1972.11976187) -- [Learning to Play Monopoly withMonte Carlo Tree Search](https://project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf) -- [Monopoly Using Reinforcement Learning ](https://ieeexplore.ieee.org/abstract/document/8929523) -- [A Markovian Exploration of Monopoly](https://pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf) -- [What's the best Monopoly strategy](https://publications.lakeforest.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1277&context=gss) - -# Monopoly Deal -- [Implementation of AI Player on "Monopoly Deal"](https://doi.org/10.1007/978-3-662-46742-8_11) - -# Nmbr9 -- [Nmbr9 as a Constraint Programming Challenge](https://zayenz.se/blog/post/nmbr9-cp2019-abstract/) - -# Pandemic -- [NP-Completeness of Pandemic](https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article) - -# Patchwork -- 
[State Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/blog/post/patchwork-modref2019-paper/) - -# Pentago -- [On Solving Pentago](http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf) - -# Quixo -- [Quixo Is Solved](https://arxiv.org/abs/2007.15895) -- [QUIXO is EXPTIME-complete](https://doi.org/10.1016/j.ipl.2020.105995) - -# Race for the Galaxy -- [SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy](https://doi.org/10.1007/978-3-319-61030-6_27) - -# The Resistance: Avalon -- [Finding Friend and Foe in Multi-Agent Games](https://arxiv.org/abs/1906.02330) - -# Risk - -- [Mini-Risk: Strategies for a Simplified Board Game](https://doi.org/10.1057/jors.1990.2) -- [A Multi-Agent System for playing the board game Risk](https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A831093&dswid=-4740) -- [Learning the risk board game with classifier systems](https://doi.org/10.1145/508791.508904) -- [Markov Chains and the RISK Board Game](https://doi.org/10.1080/0025570X.1997.11996573) -- [Markov Chains for the RISK Board Game Revisited](https://doi.org/10.1080/0025570X.2003.11953165) -- [RISK Board Game ‐ Battle Outcome Analysis](http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf) -- [Planning an endgame move set for the game RISK](https://doi.org/10.1109/TEVC.2005.856211) -- [RISKy Business: An In-Depth Look at the Game RISK](https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3/) -- [An Intelligent Artificial Player for the Game of Risk](http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf) -- [Monte Carlo Tree Search for Risk](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf) [[Presentation](https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf)] - -# Santorini -- [A Mathematical Analysis of the Game of Santorini](https://openworks.wooster.edu/independentstudy/8917/) - -# Scotland Yard -- [The complexity of Scotland Yard](http://www.illc.uva.nl/Research/Publications/Reports/PP-2006-18.text.pdf) - -# Secret Hitler -- [Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search](https://arxiv.org/abs/2005.07156) - -# Set -Set has a long history of mathematical research, so this list isn't exhaustive. 
- -- [Game, Set, Math](https://doi.org/10.4169/math.mag.85.2.083) -- [The Joy of SET](https://doi.org/10.1080/00029890.2018.1412661) - -# Settlers of Catan -- [The effectiveness of persuasion in The Settlers of Catan ](https://doi.org/10.1109/CIG.2014.6932861) -- [Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan ](https://doi.org/10.4018/IJGCMS.2018040103) -- [Game strategies for The Settlers of Catan](https://doi.org/10.1109/CIG.2014.6932884) -- [Monte-Carlo Tree Search in Settlers of Catan](https://doi.org/10.1007/978-3-642-12993-3_3) -- [Settlers of Catan bot trained using reinforcement learning (MATLAB).](https://jonzia.github.io/Catan/) -- [Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue](https://escholarship.org/content/qt9zt506xx/qt9zt506xx.pdf) -- [POMCP with Human Preferencesin Settlers of Catan](https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217) -- [Reinforcement Learning of Strategies for Settlers of Catan](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.561.6293&rep=rep1&type=pdf) -- [Deep Reinforcement Learning in Strategic Board GameEnvironments](https://doi.org/10.1007/978-3-030-14174-5_16) [[pdf](https://hal.archives-ouvertes.fr/hal-02124411/document)] -- [Monte Carlo Tree Search in a Modern Board Game Framework](https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf) -- [The impact of loaded dice in Catan](https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html) -- [Playing Catan with Cross-dimensional Neural Network](https://arxiv.org/abs/2008.07079) -- [Strategic Dialogue Management via Deep Reinforcement Learning](https://arxiv.org/abs/1511.08099) - -# Shobu -- [Shobu AI Playground](https://github.com/JayWalker512/Shobu) -- [Shobu randomly played games dataset](https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k) - -# Terra Mystica -- [Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game](https://arxiv.org/abs/2006.02716) - -# [Tetris Link](https://boardgamegeek.com/boardgame/93185/tetris-link) -- [A New Challenge: Approaching Tetris Link with AI](https://arxiv.org/abs/2004.00377) - -# Ticket to Ride -- [Evolving maps and decks for ticket to ride](https://doi.org/10.1145/3235765.3235813) -- [Materials for Ticket to Ride Seattle and a framework for making more game boards](https://github.com/dovinmu/ttr_generator) -- [The Difficulty of Learning Ticket to Ride](https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf) -- [AI-based Playtesting of Contemporary Board Games](https://doi.org/10.1145/3102071.3102105) [[pdf](http://game.engineering.nyu.edu/wp-content/uploads/2017/06/ticket-ride-fdg2017-camera-ready.pdf)] [[presentation](https://www.rtealwitter.com/slides/2020-JMM.pdf)] - -# Ultimate Tic-Tac-Toe -- [At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe](https://arxiv.org/abs/2006.02353) - -# Uno -- [The complexity of UNO](https://arxiv.org/abs/1003.2851) -- [UNO Is Hard, Even for a Single Player](https://doi.org/10.1007/978-3-642-13122-6_15) - -# Yahtzee -- [Optimal Solitaire Yahtzee Strategies](http://www.yahtzee.org.uk/optimal_yahtzee_TV.pdf) -- [Nearly Optimal Computer Play in Multi-player Yahtzee](https://doi.org/10.1007/978-3-642-17928-0_23) -- [Computer Strategies for Solitaire Yahtzee](https://doi.org/10.1109/CIG.2007.368089) -- [An optimal strategy for 
Yahtzee](http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf) -- [Yahtzee: a Large Stochastic Environment for RL Benchmarks](https://pdfs.semanticscholar.org/f5c2/e9c9b17f584f060a73036109f697ac819a23.pdf) -- [Modeling expert problem solving in a game of chance: a Yahtzee case study](https://doi.org/10.1111/1468-0394.00160) -- [Probabilites In Yahtzee](https://doi.org/10.5951/MT.75.9.0751) -- [Optimal Yahtzee performance in multi-player games](https://www.diva-portal.org/smash/get/diva2:668705/FULLTEXT01.pdf) -- [Defensive Yahtzee](https://www.diva-portal.org/smash/get/diva2:817838/FULLTEXT01.pdf) -- [Using Deep Q-Learning to Compare Strategy Ladders of Yahtzee](https://pdfs.semanticscholar.org/6bec/1c34c8ace65adc95d39cb0c0e589ae392678.pdf) -- [How to Maximize Your Score in Solitaire Yahtzee](http://www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf) +# Magic: The Gathering +- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering](https://doi.org/10.1109%2Ftciaig.2012.2204883) (journalArticle) +- [Optimal Card-Collecting Strategies for Magic: The Gathering](https://doi.org/10.1080%2F07468342.2000.11974103) (journalArticle) +- [Monte Carlo search applied to card selection in Magic: The Gathering](https://doi.org/10.1109%2Fcig.2009.5286501) (conferencePaper) +- [Magic: the Gathering is as Hard as Arithmetic](http://arxiv.org/abs/2003.05119v1) (journalArticle) +- [Magic: The Gathering is Turing Complete](http://arxiv.org/abs/1904.09828v2) (journalArticle) +- [Neural Networks Models for Analyzing Magic: the Gathering Cards](http://arxiv.org/abs/1810.03744v1) (journalArticle) # Mobile Games -- [Trainyard is NP-Hard](https://arxiv.org/abs/1603.00928) -- [Threes!, Fives, 1024!, and 2048 are Hard](https://arxiv.org/abs/1505.04274) +- [2048]() () +- [Trainyard is NP-Hard](http://arxiv.org/abs/1603.00928v1) (journalArticle) +- [Threes!, Fives, 1024!, and 2048 are Hard](http://arxiv.org/abs/1505.04274v1) (journalArticle) -## 2048 -- [Making Change in 2048](https://arxiv.org/abs/1804.07396) -- [Analysis of the Game "2048" and its Generalization in Higher Dimensions](https://arxiv.org/abs/1804.07393) -- [Temporal difference learning of N-tuple networks for the game 2048](https://ieeexplore.ieee.org/abstract/document/6932907) -- [Multi-Stage Temporal Difference Learning for 2048-like Games](https://arxiv.org/abs/1606.07374) -- [On the Complexity of Slide-and-Merge Games](https://arxiv.org/abs/1501.03837) -- [2048 is (PSPACE) Hard, but Sometimes Eas](https://arxiv.org/abs/1408.6315) -- [Systematic Selection of N-Tuple Networks for 2048](https://doi.org/10.1007/978-3-319-50935-8_8) -- [Systematic selection of N-tuple networks with consideration of interinfluence for game 2048](https://doi.org/10.1109/TAAI.2016.7880154) -- [2048 Without New Tiles Is Still Hard](https://drops.dagstuhl.de/opus/volltexte/2016/5885/) -- [An investigation into 2048 AI strategies](https://doi.org/10.1109/CIG.2014.6932920) +# 2048 +- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering](https://doi.org/10.1109%2Ftciaig.2012.2204883) (journalArticle) +- [Systematic Selection of N-Tuple Networks for 2048](https://doi.org/10.1007%2F978-3-319-50935-8_8) (bookSection) +- [Systematic selection of N-tuple networks with consideration of interinfluence for game 2048](https://doi.org/10.1109%2Ftaai.2016.7880154) (conferencePaper) +- [An investigation into 2048 AI 
strategies](https://doi.org/10.1109%2Fcig.2014.6932920) (conferencePaper) +- [Threes!, Fives, 1024!, and 2048 are Hard](http://arxiv.org/abs/1505.04274v1) (journalArticle) +- [Making Change in 2048](http://arxiv.org/abs/1804.07396v1) (journalArticle) +- [Analysis of the Game "2048" and its Generalization in Higher Dimensions](http://arxiv.org/abs/1804.07393v2) (journalArticle) +- [Multi-Stage Temporal Difference Learning for 2048-like Games](http://arxiv.org/abs/1606.07374v2) (journalArticle) +- [2048 is (PSPACE) Hard, but Sometimes Easy](http://arxiv.org/abs/1408.6315v1) (journalArticle) +- [Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering](https://doi.org/10.1109%2Ftciaig.2012.2204883) (journalArticle) +- [Systematic Selection of N-Tuple Networks for 2048](https://doi.org/10.1007%2F978-3-319-50935-8_8) (bookSection) +- [Systematic selection of N-tuple networks with consideration of interinfluence for game 2048](https://doi.org/10.1109%2Ftaai.2016.7880154) (conferencePaper) +- [An investigation into 2048 AI strategies](https://doi.org/10.1109%2Fcig.2014.6932920) (conferencePaper) +- [Threes!, Fives, 1024!, and 2048 are Hard](http://arxiv.org/abs/1505.04274v1) (journalArticle) +- [Making Change in 2048](http://arxiv.org/abs/1804.07396v1) (journalArticle) +- [Analysis of the Game "2048" and its Generalization in Higher Dimensions](http://arxiv.org/abs/1804.07393v2) (journalArticle) +- [Multi-Stage Temporal Difference Learning for 2048-like Games](http://arxiv.org/abs/1606.07374v2) (journalArticle) +- [2048 is (PSPACE) Hard, but Sometimes Easy](http://arxiv.org/abs/1408.6315v1) (journalArticle) -# Game Design -- [MDA: A Formal Approach to Game Design and Game Research ](https://www.aaai.org/Papers/Workshops/2004/WS-04-04/WS04-04-001.pdf) -- [Exploring Anonymity in Cooperative Board Games](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.5554&rep=rep1&type=pdf) +# Monopoly +- [Monopoly as a Markov Process](https://doi.org/10.1080%2F0025570x.1972.11976187) (journalArticle) -## Accessibility -- [Eighteen Months of Meeple Like Us: An Exploration into the State of Board Game Accessibility](https://doi.org/10.1007/s40869-018-0056-9) -- [Meeple Centred Design: A Heuristic Toolkit for Evaluating the Accessibility of Tabletop Games](https://doi.org/10.1007/s40869-018-0057-8) +# Monopoly Deal +- [Implementation of Artificial Intelligence with 3 Different Characters of AI Player on “Monopoly Deal” Computer Game](https://doi.org/10.1007%2F978-3-662-46742-8_11) (bookSection) -# Frameworks/Toolkits -- [RLCard: A Toolkit for Reinforcement Learning in Card Games](https://arxiv.org/abs/1910.04376) -- [GTSA: Game Tree Search Algorithms](https://github.com/AdamStelmaszczyk/gtsa) -- [Design and Implementation of TAG: A Tabletop Games Framework](https://arxiv.org/abs/2009.12065) [[GitHub](https://github.com/GAIGResearch/TabletopGames)] -- [TAG: Tabletop Games Framework](https://github.com/GAIGResearch/TabletopGames) \ No newline at end of file +# Nmbr9 +- [Nmbr9 as a Constraint Programming Challenge](http://arxiv.org/abs/2001.04238) (journalArticle) +- [Nmbr9 as a Constraint Programming Challenge](https://zayenz.se/blog/post/nmbr9-cp2019-abstract/) (blogPost) + +# Patchwork +- [State Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/blog/post/patchwork-modref2019-paper/) (blogPost) +- [State Representation and Polyomino Placement for the Game Patchwork](http://arxiv.org/abs/2001.04233) (journalArticle) +- 
[State Representation and Polyomino Placement for the Game Patchwork](https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf) (presentation) + +# Quixo +- [QUIXO is EXPTIME-complete](https://doi.org/10.1016%2Fj.ipl.2020.105995) (journalArticle) +- [Quixo Is Solved](http://arxiv.org/abs/2007.15895) (journalArticle) + +# Race for the Galaxy +- [SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy](https://doi.org/10.1007%2F978-3-319-61030-6_27) (bookSection) + +# RISK +- [Mini-Risk: Strategies for a Simplified Board Game](https://doi.org/10.1057%2Fjors.1990.2) (journalArticle) +- [Learning the risk board game with classifier systems](https://doi.org/10.1145%2F508791.508904) (conferencePaper) +- [Markov Chains and the RISK Board Game](https://doi.org/10.1080%2F0025570x.1997.11996573) (journalArticle) +- [Markov Chains for the RISK Board Game Revisited](https://doi.org/10.1080%2F0025570x.2003.11953165) (journalArticle) +- [Planning an Endgame Move Set for the Game RISK: A Comparison of Search Algorithms](https://doi.org/10.1109%2Ftevc.2005.856211) (journalArticle) +- [An Intelligent Artificial Player for the Game of Risk](http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf) (presentation) +- [RISKy Business: An In-Depth Look at the Game RISK](https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3) (journalArticle) +- [RISK Board Game ‐ Battle Outcome Analysis](http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf) (journalArticle) +- [A multi-agent system for playing the board game risk]() (book) + +# Secret Hitler +- [Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search](http://arxiv.org/abs/2005.07156) (journalArticle) + +# Set +- [Game, Set, Math](https://doi.org/10.4169%2Fmath.mag.85.2.083) (journalArticle) +- [The Joy of SET](https://doi.org/10.1080%2F00029890.2018.1412661) (journalArticle) + +# Settlers of Catan +- [The effectiveness of persuasion in The Settlers of Catan](http://ieeexplore.ieee.org/document/6932861/) (conferencePaper) +- [The effectiveness of persuasion in The Settlers of Catan](https://doi.org/10.1109%2Fcig.2014.6932861) (conferencePaper) +- [Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan](https://doi.org/10.4018%2Fijgcms.2018040103) (journalArticle) +- [Game strategies for The Settlers of Catan](https://doi.org/10.1109%2Fcig.2014.6932884) (conferencePaper) +- [Monte-Carlo Tree Search in Settlers of Catan](https://doi.org/10.1007%2F978-3-642-12993-3_3) (bookSection) +- [Deep Reinforcement Learning in Strategic Board Game Environments](https://doi.org/10.1007%2F978-3-030-14174-5_16) (bookSection) +- [Settlers of Catan bot trained using reinforcement learning](https://jonzia.github.io/Catan/) (computerProgram) +- [Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue]() (conferencePaper) +- [POMCP with Human Preferencesin Settlers of Catan](https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217) (journalArticle) +- [The impact of loaded dice in Catan](https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html) (blogPost) +- [Monte Carlo Tree Search in a Modern Board Game Framework](https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf) (journalArticle) +- [Reinforcement Learning of Strategies for Settlers of Catan]() (book) +- [Playing Catan with Cross-dimensional Neural Network](http://arxiv.org/abs/2008.07079) 
(journalArticle) + +# Shobu +- [Shobu AI Playground](https://github.com/JayWalker512/Shobu) (computerProgram) +- [Shobu randomly played games dataset](https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k) (webpage) + +# Terra Mystica +- [Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game](https://doi.org/10.1145%2F3396474.3396492) (conferencePaper) + +# Tetris Link +- [A New Challenge: Approaching Tetris Link with AI](http://arxiv.org/abs/2004.00377) (journalArticle) + +# Ticket to Ride +- [AI-based playtesting of contemporary board games](http://dl.acm.org/citation.cfm?doid=3102071.3102105) (conferencePaper) +- [Materials for Ticket to Ride Seattle and a framework for making more game boards](https://github.com/dovinmu/ttr_generator) (computerProgram) +- [The Difficulty of Learning Ticket to Ride](https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf) (report) +- [Evolving maps and decks for ticket to ride](https://dl.acm.org/doi/10.1145/3235765.3235813) (conferencePaper) +- [Applications of Graph Theory and Probability in the Board Game Ticket to Ride](https://www.rtealwitter.com/slides/2020-JMM.pdf) (presentation) + +# Ultimate Tic-Tac-Toe +- [At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe](http://arxiv.org/abs/2006.02353) (journalArticle) + +# UNO +- [UNO Is Hard, Even for a Single Player](https://doi.org/10.1007%2F978-3-642-13122-6_15) (bookSection) +- [The complexity of UNO](http://arxiv.org/abs/1003.2851v3) (journalArticle) + +# Yahtzee +- [Nearly Optimal Computer Play in Multi-player Yahtzee](https://doi.org/10.1007%2F978-3-642-17928-0_23) (bookSection) +- [Computer Strategies for Solitaire Yahtzee](https://doi.org/10.1109%2Fcig.2007.368089) (conferencePaper) +- [Modeling expert problem solving in a game of chance: a Yahtzee case study](https://doi.org/10.1111%2F1468-0394.00160) (journalArticle) \ No newline at end of file diff --git a/links.txt b/links.txt new file mode 100644 index 0000000..2fef47a --- /dev/null +++ b/links.txt @@ -0,0 +1,170 @@ +citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.5554&rep=rep1&type=pdf +citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.561.6293&rep=rep1&type=pdf +councilroom.com/openings +councilroom.com/optimal_card_ratios +councilroom.com/supply_win +cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf +digital.library.wisc.edu/1793/79080 +fdg2017.org/papers/FDG2017_demo_Hanabi.pdf +forum.dominionstrategy.com/index.php +game.engineering.nyu.edu/wp-content/uploads/2017/06/ticket-ride-fdg2017-camera-ready.pdf +hdl.handle.net/11250/2462429 +id.nii.ac.jp/1001/00205046/ +www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf +www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf +www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf +www.exag.org/wp-content/uploads/2018/10/AIIDE-18_Upload_112.pdf +www.illc.uva.nl/Research/Publications/Reports/PP-2006-18.text.pdf +www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf +www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf +www.yahtzee.org.uk/optimal_yahtzee_TV.pdf +arxiv.org/abs/0804.0071 +arxiv.org/abs/1003.2851 +arxiv.org/abs/1009.1031 +arxiv.org/abs/1408.6315 +arxiv.org/abs/1501.03837 +arxiv.org/abs/1505.04274 +arxiv.org/abs/1511.08099 +arxiv.org/abs/1603.00928 +arxiv.org/abs/1603.01911 +arxiv.org/abs/1606.07374
+arxiv.org/abs/1708.01503 +arxiv.org/abs/1804.07393 +arxiv.org/abs/1804.07396 +arxiv.org/abs/1807.04458 +arxiv.org/abs/1810.03744 +arxiv.org/abs/1811.11273 +arxiv.org/abs/1902.00506 +arxiv.org/abs/1902.06996 +arxiv.org/abs/1903.02230 +arxiv.org/abs/1904.09828 +arxiv.org/abs/1905.08617 +arxiv.org/abs/1906.02330 +arxiv.org/abs/1909.02128 +arxiv.org/abs/1909.02849 +arxiv.org/abs/1910.04376 +arxiv.org/abs/1912.02318 +arxiv.org/abs/2003.05119 +arxiv.org/abs/2004.00377 +arxiv.org/abs/2004.13291 +arxiv.org/abs/2004.13710 +arxiv.org/abs/2005.07156 +arxiv.org/abs/2006.02353 +arxiv.org/abs/2006.02716 +arxiv.org/abs/2006.04635 +arxiv.org/abs/2007.15895 +arxiv.org/abs/2008.07079 +arxiv.org/abs/2009.12065 +arxiv.org/abs/2009.12974 +arxiv.org/abs/2010.00048 +arxiv.org/abs/2010.02923 +arxiv.org/abs/2103.00683 +arxiv.org/pdf/2004.13710v2.pdf +boardgamegeek.com/boardgame/93185/tetris-link +cab.unime.it/journals/index.php/congress/article/viewFile/141/141 +commons.lib.niu.edu/handle/10843/19194 +digitalcommons.calpoly.edu/cpesp/290/ +dl.acm.org/doi/abs/10.5555/3461017.3461244 +doi.org/10.1007/978-3-030-14174-5_16 +doi.org/10.1007/978-3-319-50935-8_8 +doi.org/10.1007/978-3-319-61030-6_27 +doi.org/10.1007/978-3-642-12993-3_3 +doi.org/10.1007/978-3-642-13122-6_15 +doi.org/10.1007/978-3-642-17928-0_23 +doi.org/10.1007/978-3-662-46742-8_11 +doi.org/10.1007/s40869-018-0056-9 +doi.org/10.1007/s40869-018-0057-8 +doi.org/10.1016/j.artint.2019.103216 +doi.org/10.1016/j.ipl.2020.105995 +doi.org/10.1057/jors.1990.2 +doi.org/10.1080/00029890.2018.1412661 +doi.org/10.1080/0025570X.1972.11976187 +doi.org/10.1080/0025570X.1997.11996573 +doi.org/10.1080/0025570X.2003.11953165 +doi.org/10.1080/07468342.2000.11974103 +doi.org/10.1109/CADS.2013.6714256 +doi.org/10.1109/CEC.2017.7969465 +doi.org/10.1109/CIG.2007.368089 +doi.org/10.1109/CIG.2009.5286501 +doi.org/10.1109/CIG.2014.6932861 +doi.org/10.1109/CIG.2014.6932884 +doi.org/10.1109/CIG.2014.6932920 +doi.org/10.1109/CIG.2018.8490419 +doi.org/10.1109/CIG.2019.8847944 +doi.org/10.1109/CIG.2019.8848008 +doi.org/10.1109/FPT.2013.6718426 +doi.org/10.1109/TAAI.2016.7880154 +doi.org/10.1109/TCIAIG.2012.2204883 +doi.org/10.1109/TEVC.2005.856211 +doi.org/10.1109/TG.2020.3009359 +doi.org/10.1111/1468-0394.00160 +doi.org/10.1145/3102071.3102105 +doi.org/10.1145/3235765.3235813 +doi.org/10.1145/508791.508904 +doi.org/10.4018/IJGCMS.2018040103 +doi.org/10.4169/math.mag.85.2.083 +doi.org/10.4169/math.mag.88.5.323 +doi.org/10.5951/MT.75.9.0751 +dominionsimulator.wordpress.com/f-a-q/ +drops.dagstuhl.de/opus/volltexte/2016/5885/ +dspace.library.uu.nl/handle/1874/396955 +escholarship.org/content/qt9zt506xx/qt9zt506xx.pdf +github.com/AdamStelmaszczyk/gtsa +github.com/aronsar/hoad +github.com/captn3m0/modernart +github.com/dovinmu/ttr_generator +github.com/facebookresearch/Hanabi_SPARTA +github.com/GAIGResearch/TabletopGames +github.com/GAIGResearch/TabletopGames +github.com/JayWalker512/Shobu +github.com/jeffythedragonslayer/maglisp +github.com/mikemccllstr/dominionstats/ +github.com/Quuxplusone/Hanabi +github.com/rjtobin/HanSim +github.com/Swynfel/ceramic +github.com/WuTheFWasThat/hanabi.rs +hal.archives-ouvertes.fr/hal-02124411/document +ieee-cog.org/2020/papers2019/paper_17.pdf +ieeexplore.ieee.org/abstract/document/1013210 +ieeexplore.ieee.org/abstract/document/5429349 +ieeexplore.ieee.org/abstract/document/6374168 +ieeexplore.ieee.org/abstract/document/6932907 +ieeexplore.ieee.org/abstract/document/8080417 +ieeexplore.ieee.org/abstract/document/8490449 
+ieeexplore.ieee.org/abstract/document/8929523 +ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1 +izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html +jonzia.github.io/Catan/ +link.springer.com/chapter/10.1007/978-3-030-04179-3_20 +link.springer.com/chapter/10.1007/978-3-319-50935-8_9 +link.springer.com/chapter/10.1007/978-3-319-67468-1_7 +link.springer.com/chapter/10.1007/978-3-319-71649-7_5 +livrepository.liverpool.ac.uk/3029568/1/magic.pdf +ojs.aaai.org//index.php/AIIDE/article/view/7404 +ojs.aaai.org/index.php/AIIDE/article/view/7404/7333 +old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/ +openworks.wooster.edu/independentstudy/8917/ +pdfs.semanticscholar.org/5fc8/58802f19504ea950e20e31526dc2269b43d8.pdf +pdfs.semanticscholar.org/6bec/1c34c8ace65adc95d39cb0c0e589ae392678.pdf +pdfs.semanticscholar.org/f5c2/e9c9b17f584f060a73036109f697ac819a23.pdf +pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf +project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf +project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf +publications.lakeforest.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1277&context=gss +scholar.rose-hulman.edu/rhumj/vol3/iss2/3/ +www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217 +www.aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167/10193 +www.aaai.org/Papers/Workshops/2004/WS-04-04/WS04-04-001.pdf +www.diva-portal.org/smash/get/diva2:668705/FULLTEXT01.pdf +www.diva-portal.org/smash/get/diva2:817838/FULLTEXT01.pdf +www.diva-portal.org/smash/record.jsf?pid=diva2%3A831093&dswid=-4740 +www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf +www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article +www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k +www.llpjournal.org/2020/04/13/jidokan-jenga.html +www.researchgate.net/profile/Anestis_Fachantidis/publication/289403522_Learning_to_play_monopoly_A_Reinforcement_learning_approach/links/59dd1f3e458515f6efef1904/Learning-to-play-monopoly-A-Reinforcement-learning-approach.pdf +www.rtealwitter.com/slides/2020-JMM.pdf +www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf +www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf +zayenz.se/blog/post/nmbr9-cp2019-abstract/ +zayenz.se/blog/post/patchwork-modref2019-paper/ diff --git a/to-markdown.xsl b/to-markdown.xsl new file mode 100644 index 0000000..ff60f38 --- /dev/null +++ b/to-markdown.xsl @@ -0,0 +1,41 @@ + + + + + + + + + # + + + - [ + + ]( + + ) ( + + ) + + + + + + + +
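Note: the `to-markdown.xsl` hunk above lost its XML markup in this copy of the patch, so only the literal text fragments it emits (`# `, `- [`, `](`, `) (`, `)`) survive. As a rough reference for what the 41-line stylesheet appears to do — walk the exported Zotero RDF and print one `# <collection>` heading per group and one `- [title](url) (itemType)` bullet per entry — here is a minimal illustrative sketch. The element names used below (`z:Collection`, `dcterms:hasPart`, `z:itemType`) and the use of `rdf:about` as the item link are assumptions about the Zotero RDF export, not the contents of the original file.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: the Zotero RDF structure assumed here (z:Collection,
     dcterms:hasPart, z:itemType, rdf:about doubling as the item URL)
     is a guess, not a copy of the original stylesheet. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:z="http://www.zotero.org/namespaces/export#">

  <xsl:output method="text" encoding="UTF-8"/>

  <xsl:template match="/rdf:RDF">
    <!-- One "# Heading" per Zotero collection, sorted alphabetically. -->
    <xsl:for-each select="z:Collection">
      <xsl:sort select="dc:title"/>
      <xsl:text># </xsl:text>
      <xsl:value-of select="dc:title"/>
      <xsl:text>&#10;</xsl:text>
      <!-- Each collection member becomes "- [title](url) (itemType)". -->
      <xsl:for-each select="dcterms:hasPart">
        <xsl:variable name="ref" select="@rdf:resource"/>
        <!-- Look up the referenced item anywhere in the export. -->
        <xsl:for-each select="/rdf:RDF/*[@rdf:about = $ref]">
          <xsl:text>- [</xsl:text>
          <xsl:value-of select="dc:title"/>
          <xsl:text>](</xsl:text>
          <xsl:value-of select="@rdf:about"/>
          <xsl:text>) (</xsl:text>
          <xsl:value-of select="z:itemType"/>
          <xsl:text>)&#10;</xsl:text>
        </xsl:for-each>
      </xsl:for-each>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>

</xsl:stylesheet>
```

A sketch like this would be run exactly as in the Makefile above: `xsltproc to-markdown.xsl boardgame-research.rdf > /tmp/contents.md`, with `HEADER.md` prepended and `doctoc` regenerating the table of contents.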