<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks Menu</H1>
<DL>
<DT><A HREF="http://ieeexplore.ieee.org/document/6932861/">The effectiveness of persuasion in The Settlers of Catan</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2014.6932861">The effectiveness of persuasion in The Settlers of Catan</A>
<DT><A HREF="https://doi.org/10.4018%2Fijgcms.2018040103">Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2014.6932884">Game strategies for The Settlers of Catan</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-642-12993-3_3">Monte-Carlo Tree Search in Settlers of Catan</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-030-14174-5_16">Deep Reinforcement Learning in Strategic Board Game Environments</A>
<DT><A HREF="https://doi.org/10.1057%2Fjors.1990.2">Mini-Risk: Strategies for a Simplified Board Game</A>
<DT><A HREF="https://doi.org/10.1145%2F508791.508904">Learning the risk board game with classifier systems</A>
<DT><A HREF="https://doi.org/10.1080%2F0025570x.1997.11996573">Markov Chains and the RISK Board Game</A>
<DT><A HREF="https://doi.org/10.1080%2F0025570x.2003.11953165">Markov Chains for the RISK Board Game Revisited</A>
<DT><A HREF="https://doi.org/10.1109%2Ftevc.2005.856211">Planning an Endgame Move Set for the Game RISK: A Comparison of Search Algorithms</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2018.8490419">Monte Carlo Methods for the Game Kingdomino</A>
<DT><A HREF="https://doi.org/10.4169%2Fmath.mag.88.5.323">How to Make the Perfect Fireworks Display: Two Strategies for Hanabi</A>
<DT><A HREF="https://doi.org/10.1109%2Fcec.2017.7969465">Evaluating and modelling Hanabi-playing agents</A>
<DT><A HREF="https://doi.org/10.1016%2Fj.artint.2019.103216">The Hanabi challenge: A new frontier for AI research</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2019.8848008">The 2018 Hanabi competition</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2019.8847944">Diverse Agents for Ad-Hoc Cooperation in Hanabi</A>
<DT><A HREF="https://doi.org/10.1080%2F0025570x.1972.11976187">Monopoly as a Markov Process</A>
<DT><A HREF="https://doi.org/10.1109%2Ftciaig.2012.2204883">Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering</A>
<DT><A HREF="https://doi.org/10.1080%2F07468342.2000.11974103">Optimal Card-Collecting Strategies for Magic: The Gathering</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2009.5286501">Monte Carlo search applied to card selection in Magic: The Gathering</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-642-13122-6_15">UNO Is Hard, Even for a Single Player</A>
<DT><A HREF="https://doi.org/10.1016%2Fj.ipl.2020.105995">QUIXO is EXPTIME-complete</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-319-61030-6_27">SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy</A>
<DT><A HREF="https://doi.org/10.4169%2Fmath.mag.85.2.083">Game, Set, Math</A>
<DT><A HREF="https://doi.org/10.1080%2F00029890.2018.1412661">The Joy of SET</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-662-46742-8_11">Implementation of Artificial Intelligence with 3 Different Characters of AI Player on “Monopoly Deal” Computer Game</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-642-17928-0_23">Nearly Optimal Computer Play in Multi-player Yahtzee</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2007.368089">Computer Strategies for Solitaire Yahtzee</A>
<DT><A HREF="https://doi.org/10.1111%2F1468-0394.00160">Modeling expert problem solving in a game of chance: a Yahtzee case study</A>
<DT><A HREF="https://doi.org/10.1007%2F978-3-319-50935-8_8">Systematic Selection of N-Tuple Networks for 2048</A>
<DT><A HREF="https://doi.org/10.1109%2Ftaai.2016.7880154">Systematic selection of N-tuple networks with consideration of interinfluence for game 2048</A>
<DT><A HREF="https://doi.org/10.1109%2Fcig.2014.6932920">An investigation into 2048 AI strategies</A>
<DT><A HREF="http://arxiv.org/abs/2006.04635v2">Learning to Play No-Press Diplomacy with Best Response Policy Iteration</A>
<DT><A HREF="http://arxiv.org/abs/1909.02128v2">No Press Diplomacy: Modeling Multi-Agent Gameplay</A>
<DT><A HREF="http://arxiv.org/abs/1902.06996v1">Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game</A>
<DT><A HREF="http://arxiv.org/abs/1807.04458v2">Monte Carlo Methods for the Game Kingdomino</A>
<DT><A HREF="http://arxiv.org/abs/1909.02849v3">NP-completeness of the game Kingdomino</A>
<DT><A HREF="http://arxiv.org/abs/1912.02318v1">Improving Policies via Search in Cooperative Partially Observable Games</A>
<DT><A HREF="http://arxiv.org/abs/1603.01911v3">Hanabi is NP-hard, Even for Cheaters who Look at Their Cards</A>
<DT><A HREF="http://arxiv.org/abs/2004.13710v2">Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi</A>
<DT><A HREF="http://arxiv.org/abs/2004.13291v1">Evaluating the Rainbow DQN Agent in Hanabi with Unseen Partners</A>
<DT><A HREF="http://arxiv.org/abs/2003.05119v1">Magic: the Gathering is as Hard as Arithmetic</A>
<DT><A HREF="http://arxiv.org/abs/1904.09828v2">Magic: The Gathering is Turing Complete</A>
<DT><A HREF="http://arxiv.org/abs/1810.03744v1">Neural Networks Models for Analyzing Magic: the Gathering Cards</A>
<DT><A HREF="https://doi.org/10.1145%2F3396474.3396492">Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game</A>
<DT><A HREF="http://arxiv.org/abs/1009.1031v3">A mathematical model of the Mafia game</A>
<DT><A HREF="http://arxiv.org/abs/1003.2851v3">The complexity of UNO</A>
<DT><A HREF="http://arxiv.org/abs/1603.00928v1">Trainyard is NP-Hard</A>
<DT><A HREF="http://arxiv.org/abs/1505.04274v1">Threes!, Fives, 1024!, and 2048 are Hard</A>
<DT><A HREF="http://arxiv.org/abs/1804.07396v1">Making Change in 2048</A>
<DT><A HREF="http://arxiv.org/abs/1804.07393v2">Analysis of the Game "2048" and its Generalization in Higher Dimensions</A>
<DT><A HREF="http://arxiv.org/abs/1606.07374v2">Multi-Stage Temporal Difference Learning for 2048-like Games</A>
<DT><A HREF="http://arxiv.org/abs/1408.6315v1">2048 is (PSPACE) Hard, but Sometimes Easy</A>
<DT><A HREF="https://jonzia.github.io/Catan/">Settlers of Catan bot trained using reinforcement learning</A>
<DT><A HREF="https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217">POMCP with Human Preferences in Settlers of Catan</A>
<DT><A HREF="https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html">The impact of loaded dice in Catan</A>
<DT><A HREF="https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf">Monte Carlo Tree Search in a Modern Board Game Framework</A>
<DT><A HREF="http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf">An Intelligent Artificial Player for the Game of Risk</A>
<DT><A HREF="https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3">RISKy Business: An In-Depth Look at the Game RISK</A>
<DT><A HREF="http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf">RISK Board Game Battle Outcome Analysis</A>
<DT><A HREF="https://zayenz.se/blog/post/patchwork-modref2019-paper/">State Representation and Polyomino Placement for the Game Patchwork</A>
<DT><A HREF="http://arxiv.org/abs/2001.04233">State Representation and Polyomino Placement for the Game Patchwork</A>
<DT><A HREF="https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf">State Representation and Polyomino Placement for the Game Patchwork</A>
<DT><A HREF="http://arxiv.org/abs/2001.04238">Nmbr9 as a Constraint Programming Challenge</A>
<DT><A HREF="https://zayenz.se/blog/post/nmbr9-cp2019-abstract/">Nmbr9 as a Constraint Programming Challenge</A>
<DT><A HREF="https://ieeexplore.ieee.org/document/8490449/">Evolving Agents for the Hanabi 2018 CIG Competition</A>
<DT><A HREF="http://link.springer.com/10.1007/978-3-319-67468-1_7">Aspects of the Cooperative Card Game Hanabi</A>
<DT><A HREF="http://link.springer.com/10.1007/978-3-319-71649-7_5">Playing Hanabi Near-Optimally</A>
<DT><A HREF="http://ieeexplore.ieee.org/document/8080417/">An intentional AI for hanabi</A>
<DT><A HREF="https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167">Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information</A>
<DT><A HREF="http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf">A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs</A>
<DT><A HREF="https://github.com/WuTheFWasThat/hanabi.rs">State of the art Hanabi bots + simulation framework in rust</A>
<DT><A HREF="https://github.com/rjtobin/HanSim">A strategy simulator for the well-known cooperative card game Hanabi</A>
<DT><A HREF="https://github.com/Quuxplusone/Hanabi">A framework for writing bots that play Hanabi</A>
</DL>