Quixo is solved 🎉

Nemo 2021-01-02 23:55:40 +05:30
parent 87d5c59665
commit b5866022bd
3 changed files with 474 additions and 0 deletions


@@ -67,6 +67,9 @@ If you aren't able to access any paper on this list, please [try using Sci-Hub](
# Blokus
- [Blokus Game Solver](https://digitalcommons.calpoly.edu/cpesp/290/)
# Carcassonne
- [Playing Carcassonne with Monte Carlo Tree Search](https://arxiv.org/abs/2009.12974)
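The Carcassonne entry above evaluates vanilla MCTS (UCT) and MCTS-RAVE. As a rough illustration of the generic UCT loop those methods share, here is a self-contained Python sketch run on a toy subtraction game rather than Carcassonne; it is not the paper's implementation, and every name in it is illustrative.

```python
import math, random

class Node:
    """One game state; `value` is summed reward for the player to move here."""
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> Node
        self.visits, self.value = 0, 0.0

def moves(n):
    # Toy stand-in for a real game: remove 1 or 2 stones, last stone wins.
    return [k for k in (1, 2) if k <= n]

def uct_search(root_state, iters=3000, c=1.4):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.state)):
            node = max(node.children.values(),
                       key=lambda ch: -ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried move, if any remain.
        untried = [m for m in moves(node.state) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(node.state - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout, scored for the player to move at `node`.
        state, turn, reward = node.state, 1, -1
        while state:
            state -= random.choice(moves(state))
            reward, turn = turn, -turn    # whoever just moved is winning so far
        # 4. Backpropagation: negamax sign flip at each level.
        while node:
            node.visits += 1
            node.value += reward
            reward, node = -reward, node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(uct_search(10))   # converges on taking 1, leaving a multiple of 3
```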
# Diplomacy
- [Learning to Play No-Press Diplomacy with Best Response Policy Iteration](https://arxiv.org/abs/2006.04635)
- [No Press Diplomacy: Modeling Multi-Agent Gameplay](https://arxiv.org/abs/1909.02128)
@@ -170,6 +173,7 @@ There is a [simulator](https://dominionsimulator.wordpress.com/f-a-q/) and the c
- [On Solving Pentago](http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf)
# Quixo
- [Quixo Is Solved](https://arxiv.org/abs/2007.15895)
- [QUIXO is EXPTIME-complete](https://doi.org/10.1016/j.ipl.2020.105995)
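Per the commit title, the Tanaka et al. paper above concludes that Quixo is a draw, using value iteration plus backward induction over a compressed state space. Below is a minimal sketch of the three-valued Win/Loss/Draw labelling such solvers compute, shown on a hypothetical take-1-or-2 subtraction game rather than the paper's actual Quixo representation:

```python
from functools import lru_cache

WIN, LOSS, DRAW = 1, -1, 0    # outcome for the player to move

@lru_cache(maxsize=None)
def solve(n: int) -> int:
    """Label position `n` of a toy take-1-or-2 game by backward induction."""
    if n == 0:
        return LOSS               # the opponent just took the last stone
    children = [solve(n - k) for k in (1, 2) if n - k >= 0]
    if LOSS in children:
        return WIN                # some move leaves the opponent lost
    if all(r == WIN for r in children):
        return LOSS               # every move hands the opponent a win
    return DRAW                   # never reached in this acyclic toy game;
                                  # Quixo's cyclic positions end up here

print([solve(n) for n in range(1, 13)])   # LOSS exactly when n % 3 == 0
```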
# Race for the Galaxy
@@ -216,6 +220,7 @@ Set has a long history of mathematical research, so this list isn't exhaustive.
- [Deep Reinforcement Learning in Strategic Board Game Environments](https://doi.org/10.1007/978-3-030-14174-5_16) [[pdf](https://hal.archives-ouvertes.fr/hal-02124411/document)]
- [Monte Carlo Tree Search in a Modern Board Game Framework](https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf)
- [The impact of loaded dice in Catan](https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html)
- [Playing Catan with Cross-dimensional Neural Network](https://arxiv.org/abs/2008.07079)
# Shobu
- [Shobu AI Playground](https://github.com/JayWalker512/Shobu)
@@ -224,12 +229,18 @@ Set has a long history of mathematical research, so this list isn't exhaustive.
# Terra Mystica
- [Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game](https://arxiv.org/abs/2006.02716)
# [Tetris Link](https://boardgamegeek.com/boardgame/93185/tetris-link)
- [A New Challenge: Approaching Tetris Link with AI](https://arxiv.org/abs/2004.00377)
# Ticket to Ride
- [Evolving maps and decks for ticket to ride](https://doi.org/10.1145/3235765.3235813)
- [Materials for Ticket to Ride Seattle and a framework for making more game boards](https://github.com/dovinmu/ttr_generator)
- [The Difficulty of Learning Ticket to Ride](https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf)
- [AI-based Playtesting of Contemporary Board Games](https://doi.org/10.1145/3102071.3102105) [[pdf](http://game.engineering.nyu.edu/wp-content/uploads/2017/06/ticket-ride-fdg2017-camera-ready.pdf)] [[presentation](https://www.rtealwitter.com/slides/2020-JMM.pdf)]
# Ultimate Tic-Tac-Toe
- [At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe](https://arxiv.org/abs/2006.02353)
# Uno
- [The complexity of UNO](https://arxiv.org/abs/1003.2851)
- [UNO Is Hard, Even for a Single Player](https://doi.org/10.1007/978-3-642-13122-6_15)
@@ -275,3 +286,4 @@ Set has a long history of mathematical research, so this list isn't exhaustive.
# Frameworks/Toolkits
- [RLCard: A Toolkit for Reinforcement Learning in Card Games](https://arxiv.org/abs/1910.04376)
- [GTSA: Game Tree Search Algorithms](https://github.com/AdamStelmaszczyk/gtsa)
- [Design and Implementation of TAG: A Tabletop Games Framework](https://arxiv.org/abs/2009.12065) [[GitHub](https://github.com/GAIGResearch/TabletopGames)]
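Of the toolkits above, RLCard is the most turnkey for card games. A minimal usage sketch modeled on the project's README follows; the API names (`rlcard.make`, `env.run`, and so on) are taken from its documentation and may differ across versions, so treat this as illustrative:

```python
# Run one hand of UNO between random agents with RLCard
# (https://github.com/datamllab/rlcard); illustrative, not canonical.
import rlcard
from rlcard.agents import RandomAgent

env = rlcard.make('uno')
env.set_agents([RandomAgent(num_actions=env.num_actions)
                for _ in range(env.num_players)])

trajectories, payoffs = env.run(is_training=False)
print('payoffs per player:', payoffs)
```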


@@ -1083,3 +1083,77 @@
note = {Issue: 1},
pages = {41--42}
}
@article{reinhardt_competing_2020,
title = {Competing in a {Complex} {Hidden} {Role} {Game} with {Information} {Set} {Monte} {Carlo} {Tree} {Search}},
url = {http://arxiv.org/abs/2005.07156},
abstract = {Advances in intelligent game playing agents have led to successes in perfect information games like Go and imperfect information games like Poker. The Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms outperforms previous algorithms using Monte Carlo methods in imperfect information games. In this paper, Single Observer Information Set Monte Carlo Tree Search (SO-ISMCTS) is applied to Secret Hitler, a popular social deduction board game that combines traditional hidden role mechanics with the randomness of a card deck. This combination leads to a more complex information model than the hidden role and card deck mechanics alone. It is shown in 10108 simulated games that SO-ISMCTS plays as well as simpler rule based agents, and demonstrates the potential of ISMCTS algorithms in complicated information set domains.},
urldate = {2020-11-26},
journal = {arXiv:2005.07156 [cs]},
author = {Reinhardt, Jack},
month = may,
year = {2020},
note = {arXiv: 2005.07156},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems},
file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/D7TPSJ4Q/Reinhardt - 2020 - Competing in a Complex Hidden Role Game with Infor.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/TZ64EN6T/2005.html:text/html}
}
@article{ameneyro_playing_2020,
title = {Playing {Carcassonne} with {Monte} {Carlo} {Tree} {Search}},
url = {http://arxiv.org/abs/2009.12974},
abstract = {Monte Carlo Tree Search (MCTS) is a relatively new sampling method with multiple variants in the literature. They can be applied to a wide variety of challenging domains including board games, video games, and energy-based problems to mention a few. In this work, we explore the use of the vanilla MCTS and the MCTS with Rapid Action Value Estimation (MCTS-RAVE) in the game of Carcassonne, a stochastic game with a deceptive scoring system where limited research has been conducted. We compare the strengths of the MCTS-based methods with the Star2.5 algorithm, previously reported to yield competitive results in the game of Carcassonne when a domain-specific heuristic is used to evaluate the game states. We analyse the particularities of the strategies adopted by the algorithms when they share a common reward system. The MCTS-based methods consistently outperformed the Star2.5 algorithm given their ability to find and follow long-term strategies, with the vanilla MCTS exhibiting a more robust game-play than the MCTS-RAVE.},
urldate = {2021-01-02},
journal = {arXiv:2009.12974 [cs]},
author = {Ameneyro, Fred Valdez and Galvan, Edgar and Morales, Angel Fernando Kuri},
month = oct,
year = {2020},
note = {arXiv: 2009.12974},
keywords = {Computer Science - Artificial Intelligence},
annote = {Comment: 8 pages, 6 figures},
file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/KWUZF6UF/Ameneyro et al. - 2020 - Playing Carcassonne with Monte Carlo Tree Search.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/KGFBVHU7/2009.html:text/html}
}
@article{tanaka_quixo_2020,
title = {Quixo {Is} {Solved}},
url = {http://arxiv.org/abs/2007.15895},
abstract = {Quixo is a two-player game played on a 5\${\textbackslash}times\$5 grid where the players try to align five identical symbols. Specifics of the game require the usage of novel techniques. Using a combination of value iteration and backward induction, we propose the first complete analysis of the game. We describe memory-efficient data structures and algorithmic optimizations that make the game solvable within reasonable time and space constraints. Our main conclusion is that Quixo is a Draw game. The paper also contains the analysis of smaller boards and presents some interesting states extracted from our computations.},
urldate = {2021-01-02},
journal = {arXiv:2007.15895 [cs]},
author = {Tanaka, Satoshi and Bonnet, François and Tixeuil, Sébastien and Tamura, Yasumasa},
month = jul,
year = {2020},
note = {arXiv: 2007.15895},
keywords = {Computer Science - Computer Science and Game Theory},
annote = {Comment: 19 pages},
file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/ENGW8PNA/Tanaka et al. - 2020 - Quixo Is Solved.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/YZIUUDN9/2007.html:text/html}
}
@article{bertholon_at_2020,
title = {At {Most} 43 {Moves}, {At} {Least} 29: {Optimal} {Strategies} and {Bounds} for {Ultimate} {Tic}-{Tac}-{Toe}},
shorttitle = {At {Most} 43 {Moves}, {At} {Least} 29},
url = {http://arxiv.org/abs/2006.02353},
abstract = {Ultimate Tic-Tac-Toe is a variant of the well known tic-tac-toe (noughts and crosses) board game. Two players compete to win three aligned "fields", each of them being a tic-tac-toe game. Each move determines which field the next player must play in. We show that there exist a winning strategy for the first player, and therefore that there exist an optimal winning strategy taking at most 43 moves; that the second player can hold on at least 29 rounds; and identify any optimal strategy's first two moves.},
urldate = {2021-01-02},
journal = {arXiv:2006.02353 [cs]},
author = {Bertholon, Guillaume and Géraud-Stewart, Rémi and Kugelmann, Axel and Lenoir, Théo and Naccache, David},
month = jun,
year = {2020},
note = {arXiv: 2006.02353},
keywords = {Computer Science - Computer Science and Game Theory},
file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/USYULUK5/Bertholon et al. - 2020 - At Most 43 Moves, At Least 29 Optimal Strategies .pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/FWCEA7V4/2006.html:text/html}
}
@article{muller-brockhausen_new_2020,
title = {A {New} {Challenge}: {Approaching} {Tetris} {Link} with {AI}},
shorttitle = {A {New} {Challenge}},
url = {http://arxiv.org/abs/2004.00377},
abstract = {Decades of research have been invested in making computer programs for playing games such as Chess and Go. This paper focuses on a new game, Tetris Link, a board game that is still lacking any scientific analysis. Tetris Link has a large branching factor, hampering a traditional heuristic planning approach. We explore heuristic planning and two other approaches: Reinforcement Learning, Monte Carlo tree search. We document our approach and report on their relative performance in a tournament. Curiously, the heuristic approach is stronger than the planning/learning approaches. However, experienced human players easily win the majority of the matches against the heuristic planning AIs. We, therefore, surmise that Tetris Link is more difficult than expected. We offer our findings to the community as a challenge to improve upon.},
urldate = {2021-01-02},
journal = {arXiv:2004.00377 [cs]},
author = {Muller-Brockhausen, Matthias and Preuss, Mike and Plaat, Aske},
month = apr,
year = {2020},
note = {arXiv: 2004.00377},
keywords = {Computer Science - Artificial Intelligence},
file = {arXiv Fulltext PDF:/home/nemo/Zotero/storage/CJNXCN3A/Muller-Brockhausen et al. - 2020 - A New Challenge Approaching Tetris Link with AI.pdf:application/pdf;arXiv.org Snapshot:/home/nemo/Zotero/storage/4NNBCTUY/2004.html:text/html}
}


@@ -4659,11 +4659,386 @@ DOI: 10.1007/978-3-319-71649-7_5</dc:description>
<dcterms:dateSubmitted>2020-11-26 08:54:47</dcterms:dateSubmitted>
<z:linkMode>3</z:linkMode>
</z:Attachment>
<bib:Article rdf:about="http://arxiv.org/abs/2005.07156">
<z:itemType>journalArticle</z:itemType>
<dcterms:isPartOf>
<bib:Journal><dc:title>arXiv:2005.07156 [cs]</dc:title></bib:Journal>
</dcterms:isPartOf>
<bib:authors>
<rdf:Seq>
<rdf:li>
<foaf:Person>
<foaf:surname>Reinhardt</foaf:surname>
<foaf:givenName>Jack</foaf:givenName>
</foaf:Person>
</rdf:li>
</rdf:Seq>
</bib:authors>
<link:link rdf:resource="#item_258"/>
<link:link rdf:resource="#item_259"/>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Artificial Intelligence</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Multiagent Systems</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:title>Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search</dc:title>
<dcterms:abstract>Advances in intelligent game playing agents have led to successes in perfect information games like Go and imperfect information games like Poker. The Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms outperforms previous algorithms using Monte Carlo methods in imperfect information games. In this paper, Single Observer Information Set Monte Carlo Tree Search (SO-ISMCTS) is applied to Secret Hitler, a popular social deduction board game that combines traditional hidden role mechanics with the randomness of a card deck. This combination leads to a more complex information model than the hidden role and card deck mechanics alone. It is shown in 10108 simulated games that SO-ISMCTS plays as well as simpler rule based agents, and demonstrates the potential of ISMCTS algorithms in complicated information set domains.</dcterms:abstract>
<dc:date>2020-05-14</dc:date>
<z:libraryCatalog>arXiv.org</z:libraryCatalog>
<dc:identifier>
<dcterms:URI>
<rdf:value>http://arxiv.org/abs/2005.07156</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2020-11-26 09:00:33</dcterms:dateSubmitted>
<dc:description>arXiv: 2005.07156</dc:description>
</bib:Article>
<z:Attachment rdf:about="#item_258">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv Fulltext PDF</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/pdf/2005.07156.pdf</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2020-11-26 09:01:03</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>application/pdf</link:type>
</z:Attachment>
<z:Attachment rdf:about="#item_259">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv.org Snapshot</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/abs/2005.07156</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2020-11-26 09:01:10</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>text/html</link:type>
</z:Attachment>
<bib:Article rdf:about="http://arxiv.org/abs/2009.12974">
<z:itemType>journalArticle</z:itemType>
<dcterms:isPartOf>
<bib:Journal><dc:title>arXiv:2009.12974 [cs]</dc:title></bib:Journal>
</dcterms:isPartOf>
<bib:authors>
<rdf:Seq>
<rdf:li>
<foaf:Person>
<foaf:surname>Ameneyro</foaf:surname>
<foaf:givenName>Fred Valdez</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Galvan</foaf:surname>
<foaf:givenName>Edgar</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Morales</foaf:surname>
<foaf:givenName>Angel Fernando Kuri</foaf:givenName>
</foaf:Person>
</rdf:li>
</rdf:Seq>
</bib:authors>
<dcterms:isReferencedBy rdf:resource="#item_285"/>
<link:link rdf:resource="#item_286"/>
<link:link rdf:resource="#item_287"/>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Artificial Intelligence</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:title>Playing Carcassonne with Monte Carlo Tree Search</dc:title>
<dcterms:abstract>Monte Carlo Tree Search (MCTS) is a relatively new sampling method with multiple variants in the literature. They can be applied to a wide variety of challenging domains including board games, video games, and energy-based problems to mention a few. In this work, we explore the use of the vanilla MCTS and the MCTS with Rapid Action Value Estimation (MCTS-RAVE) in the game of Carcassonne, a stochastic game with a deceptive scoring system where limited research has been conducted. We compare the strengths of the MCTS-based methods with the Star2.5 algorithm, previously reported to yield competitive results in the game of Carcassonne when a domain-specific heuristic is used to evaluate the game states. We analyse the particularities of the strategies adopted by the algorithms when they share a common reward system. The MCTS-based methods consistently outperformed the Star2.5 algorithm given their ability to find and follow long-term strategies, with the vanilla MCTS exhibiting a more robust game-play than the MCTS-RAVE.</dcterms:abstract>
<dc:date>2020-10-04</dc:date>
<z:libraryCatalog>arXiv.org</z:libraryCatalog>
<dc:identifier>
<dcterms:URI>
<rdf:value>http://arxiv.org/abs/2009.12974</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:13:09</dcterms:dateSubmitted>
<dc:description>arXiv: 2009.12974</dc:description>
</bib:Article>
<bib:Memo rdf:about="#item_285">
<rdf:value>Comment: 8 pages, 6 figures</rdf:value>
</bib:Memo>
<z:Attachment rdf:about="#item_286">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv Fulltext PDF</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/pdf/2009.12974.pdf</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:13:12</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>application/pdf</link:type>
</z:Attachment>
<z:Attachment rdf:about="#item_287">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv.org Snapshot</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/abs/2009.12974</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:13:17</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>text/html</link:type>
</z:Attachment>
<bib:Article rdf:about="http://arxiv.org/abs/2007.15895">
<z:itemType>journalArticle</z:itemType>
<dcterms:isPartOf>
<bib:Journal><dc:title>arXiv:2007.15895 [cs]</dc:title></bib:Journal>
</dcterms:isPartOf>
<bib:authors>
<rdf:Seq>
<rdf:li>
<foaf:Person>
<foaf:surname>Tanaka</foaf:surname>
<foaf:givenName>Satoshi</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Bonnet</foaf:surname>
<foaf:givenName>François</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Tixeuil</foaf:surname>
<foaf:givenName>Sébastien</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Tamura</foaf:surname>
<foaf:givenName>Yasumasa</foaf:givenName>
</foaf:Person>
</rdf:li>
</rdf:Seq>
</bib:authors>
<dcterms:isReferencedBy rdf:resource="#item_300"/>
<link:link rdf:resource="#item_301"/>
<link:link rdf:resource="#item_302"/>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Computer Science and Game Theory</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:title>Quixo Is Solved</dc:title>
<dcterms:abstract>Quixo is a two-player game played on a 5$\times$5 grid where the players try to align five identical symbols. Specifics of the game require the usage of novel techniques. Using a combination of value iteration and backward induction, we propose the first complete analysis of the game. We describe memory-efficient data structures and algorithmic optimizations that make the game solvable within reasonable time and space constraints. Our main conclusion is that Quixo is a Draw game. The paper also contains the analysis of smaller boards and presents some interesting states extracted from our computations.</dcterms:abstract>
<dc:date>2020-07-31</dc:date>
<z:libraryCatalog>arXiv.org</z:libraryCatalog>
<dc:identifier>
<dcterms:URI>
<rdf:value>http://arxiv.org/abs/2007.15895</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:17:10</dcterms:dateSubmitted>
<dc:description>arXiv: 2007.15895</dc:description>
</bib:Article>
<bib:Memo rdf:about="#item_300">
<rdf:value>Comment: 19 pages</rdf:value>
</bib:Memo>
<z:Attachment rdf:about="#item_301">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv Fulltext PDF</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/pdf/2007.15895.pdf</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:17:17</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>application/pdf</link:type>
</z:Attachment>
<z:Attachment rdf:about="#item_302">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv.org Snapshot</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/abs/2007.15895</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:17:21</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>text/html</link:type>
</z:Attachment>
<bib:Article rdf:about="http://arxiv.org/abs/2006.02353">
<z:itemType>journalArticle</z:itemType>
<dcterms:isPartOf>
<bib:Journal><dc:title>arXiv:2006.02353 [cs]</dc:title></bib:Journal>
</dcterms:isPartOf>
<bib:authors>
<rdf:Seq>
<rdf:li>
<foaf:Person>
<foaf:surname>Bertholon</foaf:surname>
<foaf:givenName>Guillaume</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Géraud-Stewart</foaf:surname>
<foaf:givenName>Rémi</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Kugelmann</foaf:surname>
<foaf:givenName>Axel</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Lenoir</foaf:surname>
<foaf:givenName>Théo</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Naccache</foaf:surname>
<foaf:givenName>David</foaf:givenName>
</foaf:Person>
</rdf:li>
</rdf:Seq>
</bib:authors>
<link:link rdf:resource="#item_304"/>
<link:link rdf:resource="#item_305"/>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Computer Science and Game Theory</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:title>At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe</dc:title>
<dcterms:abstract>Ultimate Tic-Tac-Toe is a variant of the well known tic-tac-toe (noughts and crosses) board game. Two players compete to win three aligned &quot;fields&quot;, each of them being a tic-tac-toe game. Each move determines which field the next player must play in. We show that there exist a winning strategy for the first player, and therefore that there exist an optimal winning strategy taking at most 43 moves; that the second player can hold on at least 29 rounds; and identify any optimal strategy's first two moves.</dcterms:abstract>
<dc:date>2020-06-06</dc:date>
<z:shortTitle>At Most 43 Moves, At Least 29</z:shortTitle>
<z:libraryCatalog>arXiv.org</z:libraryCatalog>
<dc:identifier>
<dcterms:URI>
<rdf:value>http://arxiv.org/abs/2006.02353</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:17:55</dcterms:dateSubmitted>
<dc:description>arXiv: 2006.02353</dc:description>
</bib:Article>
<z:Attachment rdf:about="#item_304">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv Fulltext PDF</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/pdf/2006.02353.pdf</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:17:57</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>application/pdf</link:type>
</z:Attachment>
<z:Attachment rdf:about="#item_305">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv.org Snapshot</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/abs/2006.02353</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:18:02</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>text/html</link:type>
</z:Attachment>
<bib:Article rdf:about="http://arxiv.org/abs/2004.00377">
<z:itemType>journalArticle</z:itemType>
<dcterms:isPartOf>
<bib:Journal><dc:title>arXiv:2004.00377 [cs]</dc:title></bib:Journal>
</dcterms:isPartOf>
<bib:authors>
<rdf:Seq>
<rdf:li>
<foaf:Person>
<foaf:surname>Muller-Brockhausen</foaf:surname>
<foaf:givenName>Matthias</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Preuss</foaf:surname>
<foaf:givenName>Mike</foaf:givenName>
</foaf:Person>
</rdf:li>
<rdf:li>
<foaf:Person>
<foaf:surname>Plaat</foaf:surname>
<foaf:givenName>Aske</foaf:givenName>
</foaf:Person>
</rdf:li>
</rdf:Seq>
</bib:authors>
<link:link rdf:resource="#item_307"/>
<link:link rdf:resource="#item_308"/>
<dc:subject>
<z:AutomaticTag>
<rdf:value>Computer Science - Artificial Intelligence</rdf:value>
</z:AutomaticTag>
</dc:subject>
<dc:title>A New Challenge: Approaching Tetris Link with AI</dc:title>
<dcterms:abstract>Decades of research have been invested in making computer programs for playing games such as Chess and Go. This paper focuses on a new game, Tetris Link, a board game that is still lacking any scientific analysis. Tetris Link has a large branching factor, hampering a traditional heuristic planning approach. We explore heuristic planning and two other approaches: Reinforcement Learning, Monte Carlo tree search. We document our approach and report on their relative performance in a tournament. Curiously, the heuristic approach is stronger than the planning/learning approaches. However, experienced human players easily win the majority of the matches against the heuristic planning AIs. We, therefore, surmise that Tetris Link is more difficult than expected. We offer our findings to the community as a challenge to improve upon.</dcterms:abstract>
<dc:date>2020-04-01</dc:date>
<z:shortTitle>A New Challenge</z:shortTitle>
<z:libraryCatalog>arXiv.org</z:libraryCatalog>
<dc:identifier>
<dcterms:URI>
<rdf:value>http://arxiv.org/abs/2004.00377</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:18:26</dcterms:dateSubmitted>
<dc:description>arXiv: 2004.00377</dc:description>
</bib:Article>
<z:Attachment rdf:about="#item_307">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv Fulltext PDF</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/pdf/2004.00377.pdf</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:18:32</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>application/pdf</link:type>
</z:Attachment>
<z:Attachment rdf:about="#item_308">
<z:itemType>attachment</z:itemType>
<dc:title>arXiv.org Snapshot</dc:title>
<dc:identifier>
<dcterms:URI>
<rdf:value>https://arxiv.org/abs/2004.00377</rdf:value>
</dcterms:URI>
</dc:identifier>
<dcterms:dateSubmitted>2021-01-02 18:18:38</dcterms:dateSubmitted>
<z:linkMode>1</z:linkMode>
<link:type>text/html</link:type>
</z:Attachment>
<z:Collection rdf:about="#collection_25">
<dc:title>Accessibility</dc:title>
<dcterms:hasPart rdf:resource="http://link.springer.com/10.1007/s40869-018-0057-8"/>
<dcterms:hasPart rdf:resource="http://link.springer.com/10.1007/s40869-018-0056-9"/>
</z:Collection>
<z:Collection rdf:about="#collection_33">
<dc:title>Carcassonne</dc:title>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2009.12974"/>
</z:Collection>
<z:Collection rdf:about="#collection_8">
<dc:title>Diplomacy</dc:title>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2006.04635v2"/>
@@ -4779,6 +5154,7 @@ DOI: 10.1007/978-3-319-71649-7_5</dc:description>
<z:Collection rdf:about="#collection_18">
<dc:title>Quixo</dc:title>
<dcterms:hasPart rdf:resource="https://doi.org/10.1016%2Fj.ipl.2020.105995"/>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2007.15895"/>
</z:Collection>
<z:Collection rdf:about="#collection_12">
<dc:title>Race for the Galaxy</dc:title>
@@ -4796,6 +5172,10 @@ DOI: 10.1007/978-3-319-71649-7_5</dc:description>
<dcterms:hasPart rdf:resource="http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf"/>
<dcterms:hasPart rdf:resource="#item_144"/>
</z:Collection>
<z:Collection rdf:about="#collection_30">
<dc:title>Secret Hitler</dc:title>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2005.07156"/>
</z:Collection>
<z:Collection rdf:about="#collection_19">
<dc:title>Set</dc:title>
<dcterms:hasPart rdf:resource="https://doi.org/10.4169%2Fmath.mag.85.2.083"/>
@@ -4826,6 +5206,10 @@ DOI: 10.1007/978-3-319-71649-7_5</dc:description>
<dc:title>Terra Mystica</dc:title>
<dcterms:hasPart rdf:resource="https://doi.org/10.1145%2F3396474.3396492"/>
</z:Collection>
<z:Collection rdf:about="#collection_36">
<dc:title>Tetris Link</dc:title>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2004.00377"/>
</z:Collection>
<z:Collection rdf:about="#collection_29">
<dc:title>Ticket to Ride</dc:title>
<dcterms:hasPart rdf:resource="urn:isbn:978-1-4503-5319-9"/>
@@ -4834,6 +5218,10 @@ DOI: 10.1007/978-3-319-71649-7_5</dc:description>
<dcterms:hasPart rdf:resource="urn:isbn:978-1-4503-6571-0"/>
<dcterms:hasPart rdf:resource="https://www.rtealwitter.com/slides/2020-JMM.pdf"/>
</z:Collection>
<z:Collection rdf:about="#collection_35">
<dc:title>Ultimate Tic-Tac-Toe</dc:title>
<dcterms:hasPart rdf:resource="http://arxiv.org/abs/2006.02353"/>
</z:Collection>
<z:Collection rdf:about="#collection_17">
<dc:title>UNO</dc:title>
<dcterms:hasPart rdf:resource="https://doi.org/10.1007%2F978-3-642-13122-6_15"/>