conferencePaper
2014 IEEE Conference on Computational Intelligence and Games
DOI 10.1109/cig.2014.6932861
IEEE
Guhe
Markus
Lascarides
Alex
The effectiveness of persuasion in The Settlers of Catan
August 2014
https://doi.org/10.1109%2Fcig.2014.6932861
attachment
Submitted Version
https://www.pure.ed.ac.uk/ws/files/19353900/CIG2014.pdf
2020-07-20 18:34:57
1
application/pdf
journalArticle
10
International Journal of Gaming and Computer-Mediated Simulations
DOI 10.4018/ijgcms.2018040103
2
Boda
Márton Attila
Avoiding Revenge Using Optimal Opponent Ranking Strategy in the Board Game Catan
April 2018
https://doi.org/10.4018%2Fijgcms.2018040103
Publisher: IGI Global
47–70
attachment
Full Text
https://sci-hub.se/downloads/2020-05-28/3d/boda2018.pdf#view=FitH
2020-07-20 18:22:11
1
application/pdf
conferencePaper
2014 IEEE Conference on Computational Intelligence and Games
DOI 10.1109/cig.2014.6932884
IEEE
Guhe
Markus
Lascarides
Alex
Game strategies for The Settlers of Catan
August 2014
https://doi.org/10.1109%2Fcig.2014.6932884
attachment
Submitted Version
https://www.pure.ed.ac.uk/ws/files/19351482/CIG2014_GS.pdf
2020-07-20 18:24:09
1
application/pdf
bookSection
Lecture Notes in Computer Science
Springer Berlin Heidelberg
Szita
István
Chaslot
Guillaume
Spronck
Pieter
Monte-Carlo Tree Search in Settlers of Catan
2010
https://doi.org/10.1007%2F978-3-642-12993-3_3
DOI: 10.1007/978-3-642-12993-3_3
21–32
attachment
Full Text
https://zero.sci-hub.se/5140/3f6b582d932254ee1b7d29e6e9683934/szita2010.pdf#view=FitH
2020-07-20 18:29:58
1
application/pdf
bookSection
Multi-Agent Systems
Springer International Publishing
Xenou
Konstantia
Chalkiadakis
Georgios
Afantenos
Stergos
Deep Reinforcement Learning in Strategic Board Game Environments
2019
https://doi.org/10.1007%2F978-3-030-14174-5_16
DOI: 10.1007/978-3-030-14174-5_16
233–248
attachment
Accepted Version
https://oatao.univ-toulouse.fr/22647/1/xenou_22647.pdf
2020-07-20 18:10:35
1
application/pdf
journalArticle
41
Journal of the Operational Research Society
DOI 10.1057/jors.1990.2
1
Maliphant
Sarah A.
Smith
David K.
Mini-Risk: Strategies for a Simplified Board Game
January 1990
https://doi.org/10.1057%2Fjors.1990.2
Publisher: Informa UK Limited
9–16
attachment
Full Text
https://zero.sci-hub.se/4681/0e142dbe029d345411eb5019cea0b10a/maliphant1990.pdf#view=FitH
2020-07-20 18:28:37
1
application/pdf
conferencePaper
Proceedings of the 2002 ACM Symposium on Applied Computing - SAC '02
DOI 10.1145/508791.508904
ACM Press
Neves
Atila
Brasão
Osvaldo
Rosa
Agostinho
Learning the risk board game with classifier systems
2002
https://doi.org/10.1145%2F508791.508904
attachment
Full Text
https://dacemirror.sci-hub.se/proceedings-article/f9ce3c906d4e89b8aa3b90f15f0dfe20/neves2002.pdf#view=FitH
2020-07-20 18:26:40
1
application/pdf
journalArticle
70
Mathematics Magazine
DOI 10.1080/0025570x.1997.11996573
5
Tan
Barış
Markov Chains and the RISK Board Game
December 1997
https://doi.org/10.1080%2F0025570x.1997.11996573
Publisher: Informa UK Limited
349–357
attachment
Full Text
https://twin.sci-hub.se/6853/3bdc3204e08f60618dca66f19b9cd1fc/markov-chains-and-the-risk-board-game-1997.pdf#view=FitH
2020-07-20 18:28:15
1
application/pdf
journalArticle
76
Mathematics Magazine
DOI 10.1080/0025570x.2003.11953165
2
Osborne
Jason A.
Markov Chains for the RISK Board Game Revisited
April 2003
https://doi.org/10.1080%2F0025570x.2003.11953165
Publisher: Informa UK Limited
129–135
attachment
Full Text
https://twin.sci-hub.se/6908/ad9e3c21b4a5edae31079e43ad12c8ce/osborne2003.pdf#view=FitH
2020-07-20 18:28:23
1
application/pdf
journalArticle
9
IEEE Trans. Evol. Computat.
DOI 10.1109/tevc.2005.856211
6
Vaccaro
J. M.
Guest
C. C.
Planning an Endgame Move Set for the Game RISK: A Comparison of Search Algorithms
December 2005
https://doi.org/10.1109%2Ftevc.2005.856211
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
641–652
attachment
Full Text
https://moscow.sci-hub.se/1819/0e61163e1173d174a5261879afc2c42d/vaccaro2005.pdf#view=FitH
2020-07-20 18:31:44
1
application/pdf
conferencePaper
2018 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/cig.2018.8490419
IEEE
Gedda
Magnus
Lagerkvist
Mikael Z.
Butler
Martin
Monte Carlo Methods for the Game Kingdomino
August 2018
https://doi.org/10.1109%2Fcig.2018.8490419
attachment
Submitted Version
https://arxiv.org/pdf/1807.04458
2020-07-20 18:29:37
1
application/pdf
journalArticle
88
Mathematics Magazine
DOI 10.4169/math.mag.88.5.323
5
Cox
Christopher
Silva
Jessica De
Deorsey
Philip
Kenter
Franklin H. J.
Retter
Troy
Tobin
Josh
How to Make the Perfect Fireworks Display: Two Strategies for Hanabi
December 2015
https://doi.org/10.4169%2Fmath.mag.88.5.323
Publisher: Informa UK Limited
323–336
attachment
Full Text
https://moscow.sci-hub.se/5019/aae1c968ecb4576818556c669a20e535/christophercox2015.pdf#view=FitH
2020-07-20 18:26:02
1
application/pdf
conferencePaper
2017 IEEE Congress on Evolutionary Computation (CEC)
DOI 10.1109/cec.2017.7969465
IEEE
Walton-Rivers
Joseph
Williams
Piers R.
Bartle
Richard
Perez-Liebana
Diego
Lucas
Simon M.
Evaluating and modelling Hanabi-playing agents
June 2017
https://doi.org/10.1109%2Fcec.2017.7969465
attachment
Accepted Version
https://repository.essex.ac.uk/20341/1/1704.07069v1.pdf
2020-07-20 18:16:01
1
application/pdf
journalArticle
280
Artificial Intelligence
DOI 10.1016/j.artint.2019.103216
Bard
Nolan
Foerster
Jakob N.
Chandar
Sarath
Burch
Neil
Lanctot
Marc
Song
H. Francis
Parisotto
Emilio
Dumoulin
Vincent
Moitra
Subhodeep
Hughes
Edward
Dunning
Iain
Mourad
Shibl
Larochelle
Hugo
Bellemare
Marc G.
Bowling
Michael
The Hanabi challenge: A new frontier for AI research
March 2020
https://doi.org/10.1016%2Fj.artint.2019.103216
Publisher: Elsevier BV
103216
attachment
Full Text
https://pdf.sciencedirectassets.com/271585/1-s2.0-S0004370219X00120/1-s2.0-S0004370219300116/main.pdf
2020-07-20 18:35:12
1
application/pdf
conferencePaper
2019 IEEE Conference on Games (CoG)
DOI 10.1109/cig.2019.8848008
IEEE
Walton-Rivers
Joseph
Williams
Piers R.
Bartle
Richard
The 2018 Hanabi competition
August 2019
https://doi.org/10.1109%2Fcig.2019.8848008
attachment
Accepted Version
https://repository.essex.ac.uk/26898/2/hanabi.pdf
2020-07-20 18:34:35
1
application/pdf
conferencePaper
2019 IEEE Conference on Games (CoG)
DOI 10.1109/cig.2019.8847944
IEEE
Canaan
Rodrigo
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Diverse Agents for Ad-Hoc Cooperation in Hanabi
August 2019
https://doi.org/10.1109%2Fcig.2019.8847944
attachment
Submitted Version
https://arxiv.org/pdf/1907.03840
2020-07-20 18:11:10
1
application/pdf
journalArticle
45
Mathematics Magazine
DOI 10.1080/0025570x.1972.11976187
1
Ash
Robert B.
Bishop
Richard L.
Monopoly as a Markov Process
January 1972
https://doi.org/10.1080%2F0025570x.1972.11976187
Publisher: Informa UK Limited
26–29
attachment
Submitted Version
https://www.math.uiuc.edu/%7Ebishop/monopoly.pdf
2020-07-20 18:37:15
1
application/pdf
journalArticle
4
IEEE Trans. Comput. Intell. AI Games
DOI 10.1109/tciaig.2012.2204883
4
Cowling
Peter I.
Ward
Colin D.
Powley
Edward J.
Ensemble Determinization in Monte Carlo Tree Search for the Imperfect Information Card Game Magic: The Gathering
December 2012
https://doi.org/10.1109%2Ftciaig.2012.2204883
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
241–257
attachment
Accepted Version
https://eprints.whiterose.ac.uk/75050/1/EnsDetMagic.pdf
2020-07-20 18:14:45
1
application/pdf
journalArticle
31
The College Mathematics Journal
DOI 10.1080/07468342.2000.11974103
1
Bosch
Robert A.
Optimal Card-Collecting Strategies for Magic: The Gathering
January 2000
https://doi.org/10.1080%2F07468342.2000.11974103
Publisher: Informa UK Limited
15–21
attachment
Full Text
https://zero.sci-hub.se/6795/ba844bedd2d417e4393d7af19bb3dd47/bosch2000.pdf#view=FitH
2020-07-20 18:37:22
1
application/pdf
conferencePaper
2009 IEEE Symposium on Computational Intelligence and Games
DOI 10.1109/cig.2009.5286501
IEEE
Ward
C. D.
Cowling
P. I.
Monte Carlo search applied to card selection in Magic: The Gathering
September 2009
https://doi.org/10.1109%2Fcig.2009.5286501
attachment
Full Text
https://dacemirror.sci-hub.se/proceedings-article/dfcfc3f5502682650ac71b68af8f9b19/ward2009.pdf#view=FitH
2020-07-20 18:29:50
1
application/pdf
bookSection
Lecture Notes in Computer Science
Springer Berlin Heidelberg
Demaine
Erik D.
Demaine
Martin L.
Uehara
Ryuhei
Uno
Takeaki
Uno
Yushi
UNO Is Hard, Even for a Single Player
2010
https://doi.org/10.1007%2F978-3-642-13122-6_15
DOI: 10.1007/978-3-642-13122-6_15
133–144
attachment
Submitted Version
https://dspace.mit.edu/bitstream/1721.1/62147/1/Demaine_UNO%20is.pdf
2020-07-20 18:36:18
1
application/pdf
journalArticle
Information Processing Letters
DOI 10.1016/j.ipl.2020.105995
Mishiba
Shohei
Takenaga
Yasuhiko
QUIXO is EXPTIME-complete
July 2020
https://doi.org/10.1016%2Fj.ipl.2020.105995
Publisher: Elsevier BV
105995
attachment
Full Text
https://pdf.sciencedirectassets.com/271527/AIP/1-s2.0-S002001902030082X/main.pdf
2020-07-20 18:31:53
1
application/pdf
bookSection
Case-Based Reasoning Research and Development
Springer International Publishing
Woolford
Michael
Watson
Ian
SCOUT: A Case-Based Reasoning Agent for Playing Race for the Galaxy
2017
https://doi.org/10.1007%2F978-3-319-61030-6_27
DOI: 10.1007/978-3-319-61030-6_27
390–402
attachment
Woolford and Watson - 2017 - SCOUT A Case-Based Reasoning Agent for Playing Ra.pdf
application/pdf
journalArticle
85
Mathematics Magazine
DOI 10.4169/math.mag.85.2.083
2
Coleman
Ben
Hartshorn
Kevin
Game, Set, Math
April 2012
https://doi.org/10.4169%2Fmath.mag.85.2.083
Publisher: Informa UK Limited
83–96
attachment
Full Text
https://dacemirror.sci-hub.se/journal-article/768dabc67f6adcaa34a4c087b56b4283/game-set-math-2012.pdf#view=FitH
2020-07-20 18:24:32
1
application/pdf
journalArticle
125
The American Mathematical Monthly
DOI 10.1080/00029890.2018.1412661
3
Glass
Darren
The Joy of SET
February 2018
https://doi.org/10.1080%2F00029890.2018.1412661
Publisher: Informa UK Limited
284–288
attachment
Full Text
https://twin.sci-hub.se/6684/b949dbda2e3438aae344825abb7d0ff3/glass2018.pdf#view=FitH
2020-07-20 18:35:34
1
application/pdf
bookSection
Communications in Computer and Information Science
Springer Berlin Heidelberg
Lazarusli
Irene A.
Lukas
Samuel
Widjaja
Patrick
Implementation of Artificial Intelligence with 3 Different Characters of AI Player on “Monopoly Deal” Computer Game
2015
https://doi.org/10.1007%2F978-3-662-46742-8_11
DOI: 10.1007/978-3-662-46742-8_11
119–127
bookSection
Computers and Games
Springer Berlin Heidelberg
Pawlewicz
Jakub
Nearly Optimal Computer Play in Multi-player Yahtzee
2011
https://doi.org/10.1007%2F978-3-642-17928-0_23
DOI: 10.1007/978-3-642-17928-0_23
250–262
conferencePaper
2007 IEEE Symposium on Computational Intelligence and Games
DOI 10.1109/cig.2007.368089
IEEE
Glenn
James R.
Computer Strategies for Solitaire Yahtzee
2007
https://doi.org/10.1109%2Fcig.2007.368089
attachment
Submitted Version
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.1724&rep=rep1&type=pdf
2020-07-20 18:09:04
1
application/pdf
journalArticle
18
Expert Systems
DOI 10.1111/1468-0394.00160
2
Maynard
Ken
Moss
Patrick
Whitehead
Marcus
Narayanan
S.
Garay
Matt
Brannon
Nathan
Kantamneni
Raj Gopal
Kustra
Todd
Modeling expert problem solving in a game of chance: a Yahtzee case study
May 2001
https://doi.org/10.1111%2F1468-0394.00160
Publisher: Wiley
88–98
attachment
Full Text
https://cyber.sci-hub.se/MTAuMTExMS8xNDY4LTAzOTQuMDAxNjA=/maynard2001.pdf#view=FitH
2020-07-20 18:29:00
1
application/pdf
bookSection
Computers and Games
Springer International Publishing
Oka
Kazuto
Matsuzaki
Kiminori
Systematic Selection of N-Tuple Networks for 2048
2016
https://doi.org/10.1007%2F978-3-319-50935-8_8
DOI: 10.1007/978-3-319-50935-8_8
81–92
attachment
Full Text
https://sci-hub.se/downloads/2020-05-25/5f/oka2016.pdf#view=FitH
2020-07-20 18:32:30
1
application/pdf
conferencePaper
2016 Conference on Technologies and Applications of Artificial Intelligence (TAAI)
DOI 10.1109/taai.2016.7880154
IEEE
Matsuzaki
Kiminori
Systematic selection of N-tuple networks with consideration of interinfluence for game 2048
November 2016
https://doi.org/10.1109%2Ftaai.2016.7880154
attachment
Full Text
https://twin.sci-hub.se/6299/d9bbecbbec212dab7fe6e6a67213b1cb/matsuzaki2016.pdf#view=FitH
2020-07-20 18:32:39
1
application/pdf
conferencePaper
2014 IEEE Conference on Computational Intelligence and Games
DOI 10.1109/cig.2014.6932920
IEEE
Rodgers
Philip
Levine
John
An investigation into 2048 AI strategies
August 2014
https://doi.org/10.1109%2Fcig.2014.6932920
attachment
Full Text
https://zero.sci-hub.se/3377/2e196ce6e3cb06a636bf1ffdee8f5b6f/rodgers2014.pdf#view=FitH
2020-07-20 18:21:23
1
application/pdf
journalArticle
arxiv:2006.04635
Anthony
Thomas
Eccles
Tom
Tacchetti
Andrea
Kramár
János
Gemp
Ian
Hudson
Thomas C.
Porcel
Nicolas
Lanctot
Marc
Pérolat
Julien
Everett
Richard
Singh
Satinder
Graepel
Thore
Bachrach
Yoram
Learning to Play No-Press Diplomacy with Best Response Policy Iteration
2020
http://arxiv.org/abs/2006.04635v2
attachment
Full Text
https://arxiv.org/pdf/2006.04635v2.pdf
2020-07-20 18:27:18
1
application/pdf
journalArticle
arxiv:1909.02128
Paquette
Philip
Lu
Yuchen
Bocco
Steven
Smith
Max O.
Ortiz-Gagne
Satya
Kummerfeld
Jonathan K.
Singh
Satinder
Pineau
Joelle
Courville
Aaron
No Press Diplomacy: Modeling Multi-Agent Gameplay
2019
http://arxiv.org/abs/1909.02128v2
attachment
Full Text
https://arxiv.org/pdf/1909.02128v2.pdf
2020-07-20 18:31:04
1
application/pdf
journalArticle
arxiv:1902.06996
Tan
Hao Hao
Agent Madoff: A Heuristic-Based Negotiation Agent For The Diplomacy Strategy Game
2019
http://arxiv.org/abs/1902.06996v1
attachment
Full Text
https://arxiv.org/pdf/1902.06996v1.pdf
2020-07-20 18:21:06
1
application/pdf
journalArticle
arxiv:1807.04458
Gedda
Magnus
Lagerkvist
Mikael Z.
Butler
Martin
Monte Carlo Methods for the Game Kingdomino
2018
http://arxiv.org/abs/1807.04458v2
attachment
Full Text
https://arxiv.org/pdf/1807.04458v2.pdf
2020-07-20 18:29:18
1
application/pdf
journalArticle
arxiv:1909.02849
Nguyen
Viet-Ha
Perrot
Kevin
Vallet
Mathieu
NP-completeness of the game Kingdomino
2019
http://arxiv.org/abs/1909.02849v3
attachment
Full Text
https://arxiv.org/pdf/1909.02849v3.pdf
2020-07-20 18:31:12
1
application/pdf
journalArticle
arxiv:1912.02318
Lerer
Adam
Hu
Hengyuan
Foerster
Jakob
Brown
Noam
Improving Policies via Search in Cooperative Partially Observable Games
2019
http://arxiv.org/abs/1912.02318v1
attachment
Full Text
https://arxiv.org/pdf/1912.02318v1.pdf
2020-07-20 18:26:28
1
application/pdf
journalArticle
arxiv:1603.01911
Baffier
Jean-Francois
Chiu
Man-Kwun
Diez
Yago
Korman
Matias
Mitsou
Valia
Renssen
André van
Roeloffzen
Marcel
Uno
Yushi
Hanabi is NP-hard, Even for Cheaters who Look at Their Cards
2016
http://arxiv.org/abs/1603.01911v3
attachment
Full Text
https://arxiv.org/pdf/1603.01911v3.pdf
2020-07-20 18:25:31
1
application/pdf
journalArticle
arxiv:2004.13710
Canaan
Rodrigo
Gao
Xianbo
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi
2020
http://arxiv.org/abs/2004.13710v2
attachment
Full Text
https://arxiv.org/pdf/2004.13710v2.pdf
2020-07-20 18:25:19
1
application/pdf
journalArticle
arxiv:2004.13291
Canaan
Rodrigo
Gao
Xianbo
Chung
Youjin
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Evaluating the Rainbow DQN Agent in Hanabi with Unseen Partners
2020
http://arxiv.org/abs/2004.13291v1
attachment
Full Text
https://arxiv.org/pdf/2004.13291v1.pdf
2020-07-20 18:22:45
1
application/pdf
journalArticle
arxiv:2003.05119
Biderman
Stella
Magic: the Gathering is as Hard as Arithmetic
2020
http://arxiv.org/abs/2003.05119v1
attachment
Full Text
https://arxiv.org/pdf/2003.05119v1.pdf
2020-07-20 18:27:42
1
application/pdf
journalArticle
arxiv:1904.09828
Churchill
Alex
Biderman
Stella
Herrick
Austin
Magic: The Gathering is Turing Complete
2019
http://arxiv.org/abs/1904.09828v2
attachment
Full Text
https://arxiv.org/pdf/1904.09828v2.pdf
2020-07-20 18:27:51
1
application/pdf
journalArticle
arxiv:1810.03744
Zilio
Felipe
Prates
Marcelo
Neural Networks Models for Analyzing Magic: the Gathering Cards
2018
http://arxiv.org/abs/1810.03744v1
attachment
Full Text
https://arxiv.org/pdf/1810.03744v1.pdf
2020-07-20 18:30:42
1
application/pdf
conferencePaper
Proceedings of the 2020 4th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence
DOI 10.1145/3396474.3396492
ACM
Grichshenko
Alexandr
Araújo
Luiz Jonatã Pires de
Gimaeva
Susanna
Brown
Joseph Alexander
Using Tabu Search Algorithm for Map Generation in the Terra Mystica Tabletop Game
March 2020
https://doi.org/10.1145%2F3396474.3396492
attachment
Submitted Version
https://arxiv.org/pdf/2006.02716
2020-07-20 18:36:31
1
application/pdf
journalArticle
arxiv:1009.1031
Migdał
Piotr
A mathematical model of the Mafia game
2010
http://arxiv.org/abs/1009.1031v3
attachment
Full Text
https://arxiv.org/pdf/1009.1031v3.pdf
2020-07-20 18:20:44
1
application/pdf
journalArticle
arxiv:1003.2851
Demaine
Erik D.
Demaine
Martin L.
Harvey
Nicholas J. A.
Uehara
Ryuhei
Uno
Takeaki
Uno
Yushi
The complexity of UNO
2010
http://arxiv.org/abs/1003.2851v3
attachment
Full Text
https://arxiv.org/pdf/1003.2851v3.pdf
2020-07-20 18:34:43
1
application/pdf
journalArticle
arxiv:1603.00928
Almanza
Matteo
Leucci
Stefano
Panconesi
Alessandro
Trainyard is NP-Hard
2016
http://arxiv.org/abs/1603.00928v1
attachment
Full Text
https://arxiv.org/pdf/1603.00928v1.pdf
2020-07-20 18:36:08
1
application/pdf
journalArticle
arxiv:1505.04274
Langerman
Stefan
Uno
Yushi
Threes!, Fives, 1024!, and 2048 are Hard
2015
http://arxiv.org/abs/1505.04274v1
attachment
Full Text
https://arxiv.org/pdf/1505.04274v1.pdf
2020-07-20 18:35:46
1
application/pdf
journalArticle
arxiv:1804.07396
Eppstein
David
Making Change in 2048
2018
http://arxiv.org/abs/1804.07396v1
attachment
Full Text
https://arxiv.org/pdf/1804.07396v1.pdf
2020-07-20 18:28:01
1
application/pdf
journalArticle
arxiv:1804.07393
Das
Madhuparna
Paul
Goutam
Analysis of the Game "2048" and its Generalization in Higher Dimensions
2018
http://arxiv.org/abs/1804.07393v2
attachment
Full Text
https://arxiv.org/pdf/1804.07393v2.pdf
2020-07-20 18:21:31
1
application/pdf
journalArticle
arxiv:1606.07374
Yeh
Kun-Hao
Wu
I.-Chen
Hsueh
Chu-Hsuan
Chang
Chia-Chuan
Liang
Chao-Chin
Chiang
Han
Multi-Stage Temporal Difference Learning for 2048-like Games
2016
http://arxiv.org/abs/1606.07374v2
attachment
Full Text
https://arxiv.org/pdf/1606.07374v2.pdf
2020-07-20 18:30:19
1
application/pdf
journalArticle
arxiv:1408.6315
Mehta
Rahul
2048 is (PSPACE) Hard, but Sometimes Easy
2014
http://arxiv.org/abs/1408.6315v1
attachment
Full Text
https://arxiv.org/pdf/1408.6315v1.pdf
2020-07-20 18:20:36
1
application/pdf
computerProgram
Settlers of Catan bot trained using reinforcement learning
https://jonzia.github.io/Catan/
MATLAB
conferencePaper
34
Proceedings of the Annual Meeting of the Cognitive Science Society
Guhe
Markus
Lascarides
Alex
Trading in a multiplayer board game: Towards an analysis of non-cooperative dialogue
2012
https://escholarship.org/uc/item/9zt506xx
Issue: 34
attachment
Guhe and Lascarides - 2012 - Trading in a multiplayer board game Towards an an.pdf
application/pdf
journalArticle
POMCP with Human Preferences in Settlers of Catan
https://www.aaai.org/ocs/index.php/AIIDE/AIIDE18/paper/viewFile/18091/17217
attachment
POMCP with Human Preferences in Settlers of Catan.pdf
application/pdf
blogPost
The impact of loaded dice in Catan
https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html
journalArticle
Monte Carlo Tree Search in a Modern Board Game Framework
https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf
attachment
Full Text
https://project.dke.maastrichtuniversity.nl/games/files/bsc/Roelofs_Bsc-paper.pdf
2020-07-20 18:47:19
1
application/pdf
conferencePaper
Pfeiffer
Michael
Reinforcement Learning of Strategies for Settlers of Catan
2004
https://www.researchgate.net/publication/228728063_Reinforcement_learning_of_strategies_for_Settlers_of_Catan
attachment
Pfeiffer - 2004 - Reinforcement Learning of Strategies for Settlers .pdf
application/pdf
presentation
Michael Wolf
An Intelligent Artificial Player for the Game of Risk
2005-04-20
http://www.ke.tu-darmstadt.de/lehre/archiv/ss04/oberseminar/folien/Wolf_Michael-Slides.pdf
attachment
An Intelligent Artificial Player for the Game of R.pdf
application/pdf
journalArticle
RISKy Business: An In-Depth Look at the Game RISK
https://scholar.rose-hulman.edu/rhumj/vol3/iss2/3
attachment
RISKy Business An In-Depth Look at the Game RISK.pdf
application/pdf
journalArticle
RISK Board Game ‐ Battle Outcome Analysis
http://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf
attachment
Full Text
https://www.c4i.gr/xgeorgio/docs/RISK-board-game%20_rev-3.pdf
2020-07-20 18:54:23
1
application/pdf
thesis
Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
Olsson
Fredrik
A multi-agent system for playing the board game risk
Risk is a game in which traditional artificial-intelligence methods such as iterative deepening and alpha-beta pruning cannot be applied successfully due to the size of the search space. Distributed problem solving in the form of a multi-agent system might be the solution, but this needs to be tested before it is possible to tell whether a multi-agent system will succeed at playing Risk. This thesis describes the development of a multi-agent system that plays Risk. The system places an agent in every country on the board and uses a central agent to organize communication; an auction mechanism is used for negotiation. The experiments show that a multi-agent solution is indeed a promising approach when developing a computer-based player for the board game Risk.
2005
http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3781
51
Independent thesis Advanced level (degree of Master (One Year))
attachment
Full Text
http://bth.diva-portal.org/smash/get/diva2:831093/FULLTEXT01
2021-07-24 08:26:48
1
application/pdf
attachment
Full Text
https://www.diva-portal.org/smash/get/diva2:831093/FULLTEXT01.pdf
2021-07-24 08:28:25
3
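The architecture the abstract describes (one agent per country, a central agent coordinating, an auction for negotiation) can be sketched in a few lines. This is a toy illustration under an assumed bidding heuristic (bid = troop deficit against neighbors), not Olsson's implementation:

class CountryAgent:
    """One agent per country, as in the thesis's architecture."""
    def __init__(self, name, own_troops, enemy_neighbor_troops):
        self.name = name
        self.own = own_troops
        self.threat = enemy_neighbor_troops

    def bid(self):
        # Hypothetical heuristic: bid the troop deficit against neighbors.
        return max(0, self.threat - self.own)

def allocate_reinforcements(agents, troops):
    # Central agent: run a simple auction, one troop per round to the top bidder.
    for _ in range(troops):
        winner = max(agents, key=lambda a: a.bid())
        winner.own += 1
    return {a.name: a.own for a in agents}

agents = [CountryAgent("Ural", 3, 7), CountryAgent("Siberia", 5, 4)]
print(allocate_reinforcements(agents, troops=4))
# {'Ural': 7, 'Siberia': 5} -- the most threatened country wins every round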
blogPost
State Representation and Polyomino Placement for the Game Patchwork
https://zayenz.se/blog/post/patchwork-modref2019-paper/
journalArticle
arXiv:2001.04233 [cs]
Lagerkvist
Mikael Zayenz
Computer Science - Artificial Intelligence
State Representation and Polyomino Placement for the Game Patchwork
Modern board games are a rich source of entertainment for many people, but also contain interesting and challenging structures for game playing research and implementing game playing agents. This paper studies the game Patchwork, a two player strategy game using polyomino tile drafting and placement. The core polyomino placement mechanic is implemented in a constraint model using regular constraints, extending and improving the model in (Lagerkvist, Pesant, 2008) with: explicit rotation handling; optional placements; and new constraints for resource usage. Crucial for implementing good game playing agents is to have great heuristics for guiding the search when faced with large branching factors. This paper divides placing tiles into two parts: a policy used for placing parts and an evaluation used to select among different placements. Policies are designed based on classical packing literature as well as common standard constraint programming heuristics. For evaluation, global propagation guided regret is introduced, choosing placements based on not ruling out later placements. Extensive evaluations are performed, showing the importance of using a good evaluation and that the proposed global propagation guided regret is a very effective guide.
2020-01-13
arXiv.org
http://arxiv.org/abs/2001.04233
2020-07-21 10:55:58
arXiv: 2001.04233
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2001.04233.pdf
2020-07-21 10:56:09
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2001.04233
2020-07-21 10:56:13
1
text/html
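The evaluation idea in the abstract above (prefer placements that rule out the fewest later placements) can be illustrated with a toy stand-in. The grid representation is assumed, rotations are omitted, and this is not the paper's constraint model:

from itertools import product

def placements(grid, piece):
    # Enumerate all non-overlapping positions of a polyomino on a boolean grid.
    h, w = len(grid), len(grid[0])
    ph = max(r for r, c in piece) + 1
    pw = max(c for r, c in piece) + 1
    for r0, c0 in product(range(h - ph + 1), range(w - pw + 1)):
        cells = [(r0 + r, c0 + c) for r, c in piece]
        if all(not grid[r][c] for r, c in cells):
            yield cells

def place(grid, cells):
    g = [row[:] for row in grid]
    for r, c in cells:
        g[r][c] = True
    return g

def best_placement(grid, piece, remaining_pieces):
    # Crude regret proxy: keep the placement that preserves the most
    # future placements of the remaining pieces.
    def survivors(g):
        return sum(len(list(placements(g, p))) for p in remaining_pieces)
    options = list(placements(grid, piece))
    return max(options, key=lambda cells: survivors(place(grid, cells)), default=None)

grid = [[False] * 5 for _ in range(5)]
L = [(0, 0), (1, 0), (2, 0), (2, 1)]              # an L-shaped tile
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(best_placement(grid, L, [square]))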
presentation
State Representation and Polyomino Placement for the Game Patchwork
https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf
attachment
Full Text
https://zayenz.se/papers/Lagerkvist_ModRef_2019_Presentation.pdf
2020-07-21 10:56:59
1
application/pdf
journalArticle
arXiv:2001.04238 [cs]
Lagerkvist
Mikael Zayenz
Computer Science - Artificial Intelligence
Nmbr9 as a Constraint Programming Challenge
Modern board games are a rich source of interesting and new challenges for combinatorial problems. The game Nmbr9 is a solitaire style puzzle game using polyominoes. The rules of the game are simple to explain, but modelling the game effectively using constraint programming is hard. This abstract presents the game, contributes new generalized variants of the game suitable for benchmarking and testing, and describes a model for the presented variants. The question of the top possible score in the standard game is an open challenge.
2020-01-13
arXiv.org
http://arxiv.org/abs/2001.04238
2020-07-21 10:57:58
arXiv: 2001.04238
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2001.04238.pdf
2020-07-21 10:58:01
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2001.04238
2020-07-21 10:58:02
1
text/html
blogPost
Nmbr9 as a Constraint Programming Challenge
https://zayenz.se/blog/post/nmbr9-cp2019-abstract/
conferencePaper
DOI 10.1109/CIG.2019.8848097
Goodman
James
Re-determinizing MCTS in Hanabi
August 2019
1-8
attachment
Goodman - 2019 - Re-determinizing MCTS in Hanabi.pdf
application/pdf
conferencePaper
ISBN 978-1-5386-4359-4
2018 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/CIG.2018.8490449
Maastricht
IEEE
Canaan
Rodrigo
Shen
Haotian
Torrado
Ruben
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Evolving Agents for the Hanabi 2018 CIG Competition
August 2018
DOI.org (Crossref)
https://ieeexplore.ieee.org/document/8490449/
2020-07-21 11:01:52
1-8
2018 IEEE Conference on Computational Intelligence and Games (CIG)
attachment
Submitted Version
https://arxiv.org/pdf/1809.09764
2020-07-21 11:01:56
1
application/pdf
bookSection
765
ISBN 978-3-319-67467-4 978-3-319-67468-1
BNAIC 2016: Artificial Intelligence
Cham
Springer International Publishing
Bosse
Tibor
Bredeweg
Bert
van den Bergh
Mark J. H.
Hommelberg
Anne
Kosters
Walter A.
Spieksma
Flora M.
Aspects of the Cooperative Card Game Hanabi
2017
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-319-67468-1_7
2020-07-21 11:02:26
Series Title: Communications in Computer and Information Science
DOI: 10.1007/978-3-319-67468-1_7
93-105
attachment
Full Text
https://twin.sci-hub.se/6548/49fca9bfed767f739defcd030c004bdb/vandenbergh2017.pdf#view=FitH
2020-07-21 11:02:31
1
application/pdf
bookSection
10664
ISBN 978-3-319-71648-0 978-3-319-71649-7
Advances in Computer Games
Cham
Springer International Publishing
Winands
Mark H.M.
van den Herik
H. Jaap
Kosters
Walter A.
Bouzy
Bruno
Playing Hanabi Near-Optimally
2017
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-319-71649-7_5
2020-07-21 11:02:53
Series Title: Lecture Notes in Computer Science
DOI: 10.1007/978-3-319-71649-7_5
51-62
conferencePaper
ISBN 978-1-5386-3233-8
2017 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/CIG.2017.8080417
New York, NY, USA
IEEE
Eger
Markus
Martens
Chris
Cordoba
Marcela Alfaro
An intentional AI for hanabi
August 2017
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/8080417/
2020-07-21 11:03:36
68-75
2017 IEEE Conference on Computational Intelligence and Games (CIG)
attachment
Full Text
https://zero.sci-hub.se/6752/bcf6e994ee7503ab821bd67848727b05/eger2017.pdf#view=FitH
2020-07-21 11:03:40
1
application/pdf
conferencePaper
Osawa
Hirotaka
Solving Hanabi: Estimating Hands by Opponent's Actions in Cooperative Game with Incomplete Information
A unique behavior of humans is modifying one's unobservable behavior based on the reactions of others for cooperation. We used the card game Hanabi as an evaluation task for imitating human reflective intelligence with artificial intelligence. Hanabi is a cooperative card game with incomplete information. A player cooperates with an opponent in building several card sets constructed with the same color and ordered numbers. However, as in a blind man's bluff, each player sees the cards of all other players except his/her own. Communication between players is also restricted to information about numbers and colors, so a player is required to read the opponent's intention from the opponent's hand, estimate his/her own cards from incomplete information, and play one of them to build a set. We compared human play with several simulated strategies. The results indicate that the strategy with feedback from simulated opponents' viewpoints achieves a higher score than the other strategies.
2015
https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10167
AAAI Workshops
attachment
Osawa - 2015 - Solving Hanabi Estimating Hands by Opponent's Act.pdf
application/pdf
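A minimal sketch of the hand-estimation idea in the abstract above: filter the remaining candidate cards against the hints received so far. The standard 5-color deck is assumed; none of this is Osawa's actual strategy code:

from collections import Counter

COLORS = ["red", "blue", "green", "white", "yellow"]
DECK = Counter({(c, n): {1: 3, 2: 2, 3: 2, 4: 2, 5: 1}[n]
                for c in COLORS for n in range(1, 6)})

def candidates(visible_cards, hints):
    # visible_cards: cards seen in others' hands / the discard pile.
    # hints: predicates the hidden card must satisfy, e.g. ("color", "red", True).
    pool = DECK.copy()
    for card in visible_cards:
        pool[card] -= 1
    result = []
    for (color, number), count in pool.items():
        if count <= 0:
            continue
        ok = all((color == v) == truth if kind == "color" else (number == v) == truth
                 for kind, v, truth in hints)
        if ok:
            result.extend([(color, number)] * count)
    return result

# After a positive "red" hint and a negative "1" hint, with a red 5 visible:
hints = [("color", "red", True), ("number", 1, False)]
print(Counter(candidates([("red", 5)], hints)))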
journalArticle
Cape Cod
Eger
Markus
Martens
Chris
A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs
2017
en
Zotero
http://fdg2017.org/papers/FDG2017_demo_Hanabi.pdf
4
attachment
Eger and Martens - 2017 - A Browser-based Interface for the Exploration and .pdf
application/pdf
journalArticle
Gottwald
Eva Tallula
Eger
Markus
Martens
Chris
I see what you see: Integrating eye tracking into Hanabi playing agents
Humans’ eye movements convey a lot of information about their intentions, often unconsciously. Intelligent agents that cooperate with humans in various domains can benefit from interpreting this information. This paper contains a preliminary look at how eye tracking could be useful for agents that play the cooperative card game Hanabi with human players. We outline several situations in which an AI agent can utilize gaze information, and present an outlook on how we plan to integrate this with reimplementations of contemporary Hanabi agents.
en
Zotero
4
attachment
Gottwald et al. - I see what you see Integrating eye tracking into .pdf
application/pdf
computerProgram
State of the art Hanabi bots + simulation framework in rust
https://github.com/WuTheFWasThat/hanabi.rs
computerProgram
A strategy simulator for the well-known cooperative card game Hanabi
https://github.com/rjtobin/HanSim
computerProgram
A framework for writing bots that play Hanabi
https://github.com/Quuxplusone/Hanabi
journalArticle
Ludic Language Pedagogy
Ludic Language Pedagogy
deHaan
Jonathan
Jidoukan Jenga: Teaching English through remixing games and game rules
Let students play simple games in their L1. It’s ok!
Then:
You, the teacher, can help them critique the game in their L2.
You, the teacher, can help them change the game in their L2.
You, the teacher, can help them develop themselves.
#dropthestick #dropthecarrot #bringmeaning
2020-04-15
Teaching English through remixing games and game rules
https://www.llpjournal.org/2020/04/13/jidokan-jenga.html
📍 What is this? This is a recollection of a short lesson with some children. I used Jenga and a dictionary.
📍 Why did you make it? I want to show language teachers that simple games, and playing simple games in students’ first language can be a great foundation for helping students learn new vocabulary, think critically, and exercise creativity.
📍 Why is it radical? I taught using a simple board game (at a time when video games are over-focused on in research). I show what the learning looks like (I include a photo). The teaching and learning didn’t occur in a laboratory setting, but in the wild (in a community center). I focused on the learning around games.
📍 Who is it for? Language teachers can easily implement this lesson using Jenga or any other game. Language researchers can expand on the translating and remixing potential around games.
attachment
deHaan - 2020 - Jidoukan Jenga Teaching English through remixing .pdf
application/pdf
journalArticle
Heron
Michael James
Belford
Pauline Helen
Reid
Hayley
Crabb
Michael
Meeple Centred Design: A Heuristic Toolkit for Evaluating the Accessibility of Tabletop Games
June 2018
en
Meeple Centred Design
DOI.org (Crossref)
http://link.springer.com/10.1007/s40869-018-0057-8
2020-07-28 09:08:52
97-114
7
The Computer Games Journal
DOI 10.1007/s40869-018-0057-8
2
Comput Game J
ISSN 2052-773X
attachment
Full Text
https://link.springer.com/content/pdf/10.1007/s40869-018-0057-8.pdf
2020-07-28 09:08:55
1
application/pdf
journalArticle
7
The Computer Games Journal
DOI 10.1007/s40869-018-0056-9
2
Comput Game J
ISSN 2052-773X
Heron
Michael James
Belford
Pauline Helen
Reid
Hayley
Crabb
Michael
Eighteen Months of Meeple Like Us: An Exploration into the State of Board Game Accessibility
June 2018
en
Eighteen Months of Meeple Like Us
DOI.org (Crossref)
http://link.springer.com/10.1007/s40869-018-0056-9
2020-07-28 09:09:05
75-95
attachment
Full Text
https://link.springer.com/content/pdf/10.1007/s40869-018-0056-9.pdf
2020-07-28 09:09:08
1
application/pdf
thesis
Utrecht University
Andel
Daniël
On the complexity of Hive
It is shown that for an arbitrary position of a Hive game where both players have the same set of N pieces it is PSPACE-hard to determine whether one of the players has a winning strategy. The proof is done by reducing the known PSPACE-complete set of true quantified boolean formulas to a game concerning these formulas, then to the game generalised geography, then to a version of that game with the restriction of having only nodes with maximum degree 3, and finally to generalised Hive. This thesis includes a short introduction to the subject of computational complexity.
May 2020
en-US
On the complexity of Hive
https://dspace.library.uu.nl/handle/1874/396955
33
Bachelor thesis
attachment
Andel - 2020 - On the complexity of Hive.pdf
application/pdf
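A compact restatement of the reduction chain described in the abstract above (the notation is ours, not quoted from the thesis):

\[
\mathrm{TQBF} \;\le_p\; \text{formula game} \;\le_p\; \text{generalized geography}
\;\le_p\; \text{generalized geography with max degree } 3 \;\le_p\; \text{generalized Hive},
\]

so deciding whether a player has a winning strategy in generalized Hive is PSPACE-hard.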
journalArticle
arXiv:2010.00048 [cs]
Kunda
Maithilee
Rabkina
Irina
Computer Science - Artificial Intelligence
Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game
We propose a new class of "grand challenge" AI problems that we call creative captioning---generating clever, interesting, or abstract captions for images, as well as understanding such captions. Creative captioning draws on core AI research areas of vision, natural language processing, narrative reasoning, and social reasoning, and across all these areas, it requires sophisticated uses of common sense and cultural knowledge. In this paper, we analyze several specific research problems that fall under creative captioning, using the popular board game Dixit as both inspiration and proposed testing ground. We expect that Dixit could serve as an engaging and motivating benchmark for creative captioning across numerous AI research communities for the coming 1-2 decades.
2020-09-30
Creative Captioning
arXiv.org
http://arxiv.org/abs/2010.00048
2020-10-12 04:03:28
arXiv: 2010.00048
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2010.00048.pdf
2020-10-12 04:03:46
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2010.00048
2020-10-12 04:03:53
1
text/html
computerProgram
Shobu AI Playground
https://github.com/JayWalker512/Shobu
webpage
Shobu randomly played games dataset
https://www.kaggle.com/bsfoltz/shobu-randomly-played-games-104k
conferencePaper
ISBN 978-1-4503-5319-9
Proceedings of the International Conference on the Foundations of Digital Games - FDG '17
DOI 10.1145/3102071.3102105
Hyannis, Massachusetts
ACM Press
de Mesentier Silva
Fernando
Lee
Scott
Togelius
Julian
Nealen
Andy
AI-based playtesting of contemporary board games
2017
en
DOI.org (Crossref)
http://dl.acm.org/citation.cfm?doid=3102071.3102105
2020-10-12 04:09:30
1-10
the International Conference
attachment
Full Text
https://twin.sci-hub.se/6553/d80b9cdf7f993e1137d0b129dec94e6d/demesentiersilva2017.pdf#view=FitH
2020-10-12 04:09:38
1
application/pdf
attachment
PDF
http://game.engineering.nyu.edu/wp-content/uploads/2017/06/ticket-ride-fdg2017-camera-ready.pdf
2020-10-12 04:13:00
3
computerProgram
Copley
Rowan
Materials for Ticket to Ride Seattle and a framework for making more game boards
https://github.com/dovinmu/ttr_generator
report
Nguyen
Cuong
Dinjian
Daniel
The Difficulty of Learning Ticket to Ride
Ticket to Ride is a very popular, award-winning board game where you try to score the most points while building a railway spanning cities in America. For a computer to learn to play this game is very difficult due to the vast state-action space. This project will explain why featurizing your state and implementing curriculum learning can help agents learn as state-action spaces grow too large for traditional learning methods to be effective.
https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf
attachment
Full Text
https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL/reports/Nguyen_Dinjian_report.pdf
2021-07-24 08:19:13
1
application/pdf
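The two techniques the abstract above names, state featurization and curriculum learning, reduce to something like this toy sketch; the feature choices and the map-size schedule are assumptions, not the report's:

def featurize(state):
    # state: dict with hand counts, remaining trains, options, ticket progress.
    return [
        sum(state["hand"].values()),            # cards in hand
        state["trains_left"],                   # trains remaining
        len(state["claimable_routes"]),         # immediate options
        state["ticket_points_completed"],       # progress signal
    ]

def curriculum(train_agent):
    # Hypothetical schedule: start on tiny maps, grow toward the full board.
    for num_cities in (5, 10, 20, 36):
        train_agent(num_cities)

curriculum(lambda n: print(f"training on a {n}-city map"))
print(featurize({"hand": {"red": 3, "blue": 1}, "trains_left": 45,
                 "claimable_routes": ["a", "b"], "ticket_points_completed": 0}))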
conferencePaper
ISBN 978-1-4503-6571-0
Proceedings of the 13th International Conference on the Foundations of Digital Games
DOI 10.1145/3235765.3235813
Malmö Sweden
ACM
de Mesentier Silva
Fernando
Lee
Scott
Togelius
Julian
Nealen
Andy
Evolving maps and decks for ticket to ride
2018-08-07
en
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3235765.3235813
2020-10-12 04:12:33
1-7
FDG '18: Foundations of Digital Games 2018
attachment
Full Text
https://twin.sci-hub.se/7128/24e28b0429626f565aafd93768332e73/demesentiersilva2018.pdf#view=FitH
2020-10-12 04:12:36
1
application/pdf
journalArticle
arXiv:2008.07079 [cs, stat]
Gendre
Quentin
Kaneko
Tomoyuki
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Statistics - Machine Learning
Playing Catan with Cross-dimensional Neural Network
Catan is a strategic board game having interesting properties, including multi-player, imperfect information, stochastic, complex state space structure (hexagonal board where each vertex, edge and face has its own features, cards for each player, etc), and a large action space (including negotiation). Therefore, it is challenging to build AI agents by Reinforcement Learning (RL for short), without domain knowledge nor heuristics. In this paper, we introduce cross-dimensional neural networks to handle a mixture of information sources and a wide variety of outputs, and empirically demonstrate that the network dramatically improves RL in Catan. We also show that, for the first time, a RL agent can outperform jsettler, the best heuristic agent available.
2020-08-17
arXiv.org
http://arxiv.org/abs/2008.07079
2020-10-12 04:19:57
arXiv: 2008.07079
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2008.07079.pdf
2020-10-12 04:20:04
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2008.07079
2020-10-12 04:20:10
1
text/html
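The abstract above stresses Catan's mixture of board-structured and flat inputs. A generic fused-input network in plain PyTorch (not the paper's cross-dimensional layers; all shapes are illustrative assumptions) looks like:

import torch
import torch.nn as nn

class MixedInputNet(nn.Module):
    def __init__(self, board_channels=8, flat_dim=20, actions=50):
        super().__init__()
        self.board = nn.Sequential(
            nn.Conv2d(board_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool the board tensor to a vector
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + flat_dim, 64), nn.ReLU(), nn.Linear(64, actions),
        )

    def forward(self, board, flat):
        # Fuse the pooled board embedding with flat per-player features.
        return self.head(torch.cat([self.board(board), flat], dim=-1))

net = MixedInputNet()
logits = net(torch.zeros(1, 8, 11, 11), torch.zeros(1, 20))
print(logits.shape)  # torch.Size([1, 50])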
conferencePaper
ISBN 978-1-4503-8878-8
11th Hellenic Conference on Artificial Intelligence
DOI 10.1145/3411408.3411413
Athens Greece
ACM
Theodoridis
Alexios
Chalkiadakis
Georgios
Monte Carlo Tree Search for the Game of Diplomacy
2020-09-02
en
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3411408.3411413
2020-10-12 04:20:38
16-25
SETN 2020: 11th Hellenic Conference on Artificial Intelligence
journalArticle
Eger
Markus
Martens
Chris
Sauma Chacon
Pablo
Alfaro Cordoba
Marcela
Hidalgo Cespedes
Jeisson
Operationalizing Intentionality to Play Hanabi with Human Players
2020
DOI.org (Crossref)
https://ieeexplore.ieee.org/document/9140404/
2020-11-26 08:48:44
1-1
IEEE Transactions on Games
DOI 10.1109/TG.2020.3009359
IEEE Trans. Games
ISSN 2475-1502, 2475-1510
attachment
Full Text
https://sci-hub.se/downloads/2020-08-17/f1/eger2020.pdf#view=FitH
2020-11-26 08:48:52
1
application/pdf
journalArticle
16
Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment
1
AIIDE
Canaan
Rodrigo
Gao
Xianbo
Chung
Youjin
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents
Hanabi is a multiplayer cooperative card game, where only your partners know your cards. All players succeed or fail together. This makes the game an excellent testbed for studying collaboration. Recently, it has been shown that deep neural networks can be trained through self-play to play the game very well. However, such agents generally do not play well with others. In this paper, we investigate the consequences of training Rainbow DQN agents with human-inspired rule-based agents. We analyze with which agents Rainbow agents learn to play well, and how well playing skill transfers to agents they were not trained with. We also analyze patterns of communication between agents to elucidate how collaboration happens. A key finding is that while most agents only learn to play well with partners seen during training, one particular agent leads the Rainbow algorithm towards a much more general policy. The metrics and hypotheses advanced in this paper can be used for further study of collaborative agents.
October 1, 2020
https://ojs.aaai.org/index.php/AIIDE/article/view/7404
2020-11-26
Section: Full Oral Papers
31-37
attachment
View PDF
https://ojs.aaai.org/index.php/AIIDE/article/view/7404/7333
2020-11-26 08:52:38
3
conferencePaper
Proceedings of the 82nd National Convention of IPSJ (2020)
ひい
とう
市来
正裕
中里
研一
Playing mini-Hanabi card game with Q-learning
February 2020
http://id.nii.ac.jp/1001/00205046/
Issue: 1
41–42
attachment
View PDF
https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_uri&item_id=205142&file_id=1&file_no=1
2020-11-26 08:54:47
3
journalArticle
arXiv:2005.07156 [cs]
Reinhardt
Jack
Computer Science - Artificial Intelligence
Computer Science - Multiagent Systems
Competing in a Complex Hidden Role Game with Information Set Monte Carlo Tree Search
Advances in intelligent game playing agents have led to successes in perfect information games like Go and imperfect information games like Poker. The Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms outperforms previous algorithms using Monte Carlo methods in imperfect information games. In this paper, Single Observer Information Set Monte Carlo Tree Search (SO-ISMCTS) is applied to Secret Hitler, a popular social deduction board game that combines traditional hidden role mechanics with the randomness of a card deck. This combination leads to a more complex information model than the hidden role and card deck mechanics alone. It is shown in 10108 simulated games that SO-ISMCTS plays as well as simpler rule based agents, and demonstrates the potential of ISMCTS algorithms in complicated information set domains.
2020-05-14
arXiv.org
http://arxiv.org/abs/2005.07156
2020-11-26 09:00:33
arXiv: 2005.07156
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2005.07156.pdf
2020-11-26 09:01:03
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2005.07156
2020-11-26 09:01:10
1
text/html
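The core loop of Single Observer ISMCTS, which the abstract above applies to Secret Hitler, can be skeletonized as below. Tree-based UCT selection is replaced by a random stand-in and the game interface is assumed, so this is a shape sketch rather than the paper's agent:

import random
from collections import defaultdict

def so_ismcts(legal_actions, sample_determinization, simulate, iterations=1000):
    wins = defaultdict(float)
    visits = defaultdict(int)
    for _ in range(iterations):
        state = sample_determinization()         # fix hidden info for this pass
        action = random.choice(legal_actions)    # stand-in for UCT selection
        wins[action] += simulate(state, action)  # rollout to a win/loss value
        visits[action] += 1
    return max(legal_actions, key=lambda a: wins[a] / max(visits[a], 1))

# Hypothetical usage with toy stubs: sample who holds the hidden role,
# then score a nomination against that sample.
actions = ["nominate_A", "nominate_B"]
best = so_ismcts(actions,
                 sample_determinization=lambda: {"hitler": random.choice("ABC")},
                 simulate=lambda s, a: 1.0 if s["hitler"] not in a else 0.0)
print(best)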
journalArticle
arXiv:2009.12974 [cs]
Ameneyro
Fred Valdez
Galvan
Edgar
Morales
Angel Fernando Kuri
Computer Science - Artificial Intelligence
Playing Carcassonne with Monte Carlo Tree Search
Monte Carlo Tree Search (MCTS) is a relatively new sampling method with multiple variants in the literature. They can be applied to a wide variety of challenging domains including board games, video games, and energy-based problems to mention a few. In this work, we explore the use of the vanilla MCTS and the MCTS with Rapid Action Value Estimation (MCTS-RAVE) in the game of Carcassonne, a stochastic game with a deceptive scoring system where limited research has been conducted. We compare the strengths of the MCTS-based methods with the Star2.5 algorithm, previously reported to yield competitive results in the game of Carcassonne when a domain-specific heuristic is used to evaluate the game states. We analyse the particularities of the strategies adopted by the algorithms when they share a common reward system. The MCTS-based methods consistently outperformed the Star2.5 algorithm given their ability to find and follow long-term strategies, with the vanilla MCTS exhibiting a more robust game-play than the MCTS-RAVE.
2020-10-04
arXiv.org
http://arxiv.org/abs/2009.12974
2021-01-02 18:13:09
arXiv: 2009.12974
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2009.12974.pdf
2021-01-02 18:13:12
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2009.12974
2021-01-02 18:13:17
1
text/html
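The MCTS-RAVE variant compared in the abstract above blends a node's own value estimate with the all-moves-as-first (AMAF) statistic. One standard schedule (Gelly and Silver's; an assumption here, since the paper may weight differently) is:

\[
\tilde{Q}(s,a) \;=\; (1-\beta)\,Q(s,a) + \beta\,Q_{\mathrm{RAVE}}(s,a),
\qquad
\beta \;=\; \sqrt{\frac{k}{3N(s) + k}},
\]

where $N(s)$ is the visit count of state $s$ and $k$ is the equivalence parameter.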
journalArticle
arXiv:2007.15895 [cs]
Tanaka
Satoshi
Bonnet
François
Tixeuil
Sébastien
Tamura
Yasumasa
Computer Science - Computer Science and Game Theory
Quixo Is Solved
Quixo is a two-player game played on a 5$\times$5 grid where the players try to align five identical symbols. Specifics of the game require the usage of novel techniques. Using a combination of value iteration and backward induction, we propose the first complete analysis of the game. We describe memory-efficient data structures and algorithmic optimizations that make the game solvable within reasonable time and space constraints. Our main conclusion is that Quixo is a Draw game. The paper also contains the analysis of smaller boards and presents some interesting states extracted from our computations.
2020-07-31
arXiv.org
http://arxiv.org/abs/2007.15895
2021-01-02 18:17:10
arXiv: 2007.15895
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2007.15895.pdf
2021-01-02 18:17:17
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2007.15895
2021-01-02 18:17:21
1
text/html
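The combination of backward induction and value iteration the abstract above describes can be illustrated with a toy retrograde solver. Real Quixo has cycles (hence the paper's value iteration); this memoized negamax assumes an acyclic game graph and is not the paper's optimized implementation:

def solve(graph, terminal):
    # graph: state -> list of successor states; terminal: state -> +1/0/-1
    # from the viewpoint of the player to move at that state.
    value = {}
    def val(s):
        if s in value:
            return value[s]
        if s in terminal:
            value[s] = terminal[s]
        else:
            # The mover picks the child that is worst for the opponent (negamax).
            value[s] = max(-val(c) for c in graph[s])
        return value[s]
    for s in graph:
        val(s)
    return value

graph = {"root": ["a", "b"], "a": ["w"], "b": ["d"]}
terminal = {"w": -1, "d": 0}
print(solve(graph, terminal))
# root resolves to 0: a draw, echoing the paper's headline result for Quixo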
journalArticle
arXiv:2006.02353 [cs]
Bertholon
Guillaume
Géraud-Stewart
Rémi
Kugelmann
Axel
Lenoir
Théo
Naccache
David
Computer Science - Computer Science and Game Theory
At Most 43 Moves, At Least 29: Optimal Strategies and Bounds for Ultimate Tic-Tac-Toe
Ultimate Tic-Tac-Toe is a variant of the well known tic-tac-toe (noughts and crosses) board game. Two players compete to win three aligned "fields", each of them being a tic-tac-toe game. Each move determines which field the next player must play in. We show that there exist a winning strategy for the first player, and therefore that there exist an optimal winning strategy taking at most 43 moves; that the second player can hold on at least 29 rounds; and identify any optimal strategy's first two moves.
2020-06-06
At Most 43 Moves, At Least 29
arXiv.org
http://arxiv.org/abs/2006.02353
2021-01-02 18:17:55
arXiv: 2006.02353
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2006.02353.pdf
2021-01-02 18:17:57
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2006.02353
2021-01-02 18:18:02
1
text/html
journalArticle
arXiv:2004.00377 [cs]
Muller-Brockhausen
Matthias
Preuss
Mike
Plaat
Aske
Computer Science - Artificial Intelligence
A New Challenge: Approaching Tetris Link with AI
Decades of research have been invested in making computer programs for playing games such as Chess and Go. This paper focuses on a new game, Tetris Link, a board game that is still lacking any scientific analysis. Tetris Link has a large branching factor, hampering a traditional heuristic planning approach. We explore heuristic planning and two other approaches: reinforcement learning and Monte Carlo tree search. We document our approach and report on their relative performance in a tournament. Curiously, the heuristic approach is stronger than the planning/learning approaches. However, experienced human players easily win the majority of the matches against the heuristic planning AIs. We therefore surmise that Tetris Link is more difficult than expected. We offer our findings to the community as a challenge to improve upon.
2020-04-01
A New Challenge
arXiv.org
http://arxiv.org/abs/2004.00377
2021-01-02 18:18:26
arXiv: 2004.00377
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2004.00377.pdf
2021-01-02 18:18:32
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2004.00377
2021-01-02 18:18:38
1
text/html
journalArticle
arXiv:1511.08099 [cs]
Cuayáhuitl
Heriberto
Keizer
Simon
Lemon
Oliver
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
2015-11-25
arXiv.org
http://arxiv.org/abs/1511.08099
2021-01-02 18:29:38
arXiv: 1511.08099
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1511.08099.pdf
2021-01-02 18:29:43
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1511.08099
2021-01-02 18:29:50
1
text/html
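The abstract above trains negotiation policies with deep RL over a high-dimensional state. As a much-reduced illustration of the underlying update (tabular Q-learning over made-up trade offers; none of these names or payoffs come from the paper):

import random
from collections import defaultdict

Q = defaultdict(float)
ACTIONS = ["offer_wood_for_sheep", "offer_ore_for_wheat", "reject"]

def simulate(state, action):
    # Stub environment: reward any offer, keep the state unchanged.
    return (1.0 if action != "reject" else 0.0), state

def step(state, alpha=0.1, gamma=0.95, eps=0.2):
    # Epsilon-greedy action choice followed by the standard Q-learning update.
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda x: Q[(state, x)]))
    reward, nxt = simulate(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
    return nxt

s = "early_game"
for _ in range(100):
    s = step(s)
print(max(ACTIONS, key=lambda a: Q[("early_game", a)]))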
conferencePaper
Applying Neural Networks and Genetic Programming to the Game Lost Cities
https://minds.wisconsin.edu/bitstream/handle/1793/79080/LydeenSpr18.pdf?sequence=1&isAllowed=y
attachment
LydeenSpr18.pdf
https://minds.wisconsin.edu/bitstream/handle/1793/79080/LydeenSpr18.pdf
2021-06-12 17:03:24
3
report
A summary of a dissertation on Azul
https://old.reddit.com/r/boardgames/comments/hxodaf/update_i_wrote_my_dissertation_on_azul/
conferencePaper
Ceramic: A research environment based on the multi-player strategic board game Azul
https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=207669&item_no=1&attribute_id=1&file_no=1
computerProgram
Ceramic: A research environment based on the multi-player strategic board game Azul
https://github.com/Swynfel/ceramic
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718426
Kyoto, Japan
IEEE
Yoza
Takashi
Moriwaki
Retsu
Torigai
Yuki
Kamikubo
Yuki
Kubota
Takayuki
Watanabe
Takahiro
Fujimori
Takumi
Ito
Hiroyuki
Seo
Masato
Akagi
Kouta
Yamaji
Yuichiro
Watanabe
Minoru
FPGA Blokus Duo Solver using a massively parallel architecture
December 2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718426/
2021-06-28 14:38:57
494-497
2013 International Conference on Field-Programmable Technology (FPT)
attachment
Full Text
https://zero.sci-hub.se/2654/a4d3e713290066b6db7db1d9eedd194e/yoza2013.pdf#view=FitH
2021-06-28 14:39:08
1
application/pdf
conferencePaper
ISBN 978-1-4799-0565-2 978-1-4799-0562-1 978-1-4799-0563-8
The 17th CSI International Symposium on Computer Architecture & Digital Systems (CADS 2013)
DOI 10.1109/CADS.2013.6714256
Tehran, Iran
IEEE
Jahanshahi
Ali
Taram
Mohammad Kazem
Eskandari
Nariman
Blokus Duo game on FPGA
October 2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6714256/
2021-06-28 14:39:04
149-152
2013 17th CSI International Symposium on Computer Architecture and Digital Systems (CADS)
attachment
Full Text
https://zero.sci-hub.se/3228/9ae6ca1efab5a2ebb63dd4e22a13bf04/jahanshahi2013.pdf#view=FitH
2021-06-28 14:39:07
1
application/pdf
journalArticle
The World Wide Web Conference
DOI 10.1145/3308558.3314131
Hsu
Chao-Chun
Chen
Yu-Hua
Chen
Zi-Yuan
Lin
Hsin-Yu
Huang
Ting-Hao 'Kenneth'
Ku
Lun-Wei
Computer Science - Computation and Language
Dixit: Interactive Visual Storytelling via Term Manipulation
In this paper, we introduce Dixit, an interactive visual storytelling system that the user interacts with iteratively to compose a short story for a photo sequence. The user initiates the process by uploading a sequence of photos. Dixit first extracts text terms from each photo which describe the objects (e.g., boy, bike) or actions (e.g., sleep) in the photo, and then allows the user to add new terms or remove existing terms. Dixit then generates a short story based on these terms. Behind the scenes, Dixit uses an LSTM-based model trained on image caption data and FrameNet to distill terms from each image and utilizes a transformer decoder to compose a context-coherent story. Users change images or terms iteratively with Dixit to create the most ideal story. Dixit also allows users to manually edit and rate stories. The proposed procedure opens up possibilities for interpretable and controllable visual storytelling, allowing users to understand the story formation rationale and to intervene in the generation process.
2019-05-13
Dixit
arXiv.org
http://arxiv.org/abs/1903.02230
2021-06-28 14:40:29
arXiv: 1903.02230
3531-3535
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1903.02230.pdf
2021-06-28 14:40:38
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1903.02230
2021-06-28 14:40:43
1
text/html
computerProgram
Dominion Simulator
https://dominionsimulator.wordpress.com/f-a-q/
computerProgram
Dominion Simulator Source Code
https://github.com/mikemccllstr/dominionstats/
blogPost
Best and worst openings in Dominion
http://councilroom.com/openings
blogPost
Optimal Card Ratios in Dominion
http://councilroom.com/optimal_card_ratios
blogPost
Card Winning Stats on Dominion Server
http://councilroom.com/supply_win
forumPost
Dominion Strategy Forum
http://forum.dominionstrategy.com/index.php
journalArticle
arXiv:1811.11273 [cs]
Bendekgey
Henry
Computer Science - Artificial Intelligence
Clustering Player Strategies from Variable-Length Game Logs in Dominion
We present a method for encoding game logs as numeric features in the card game Dominion. We then run the manifold learning algorithm t-SNE on these encodings to visualize the landscape of player strategies. By quantifying game states as the relative prevalence of cards in a player's deck, we create visualizations that capture qualitative differences in player strategies. Different ways of deviating from the starting game state appear as different rays in the visualization, giving it an intuitive explanation. This is a promising new direction for understanding player strategies across games that vary in length.
2018-12-12
arXiv.org
http://arxiv.org/abs/1811.11273
2021-06-28 14:43:21
arXiv: 1811.11273
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1811.11273.pdf
2021-06-28 14:43:27
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1811.11273
2021-06-28 14:43:31
1
text/html
computerProgram
Hanabi Open Agent Dataset
https://github.com/aronsar/hoad
conferencePaper
Hanabi Open Agent Dataset
https://dl.acm.org/doi/10.5555/3463952.3464188
journalArticle
arXiv:2010.02923 [cs]
Gray
Jonathan
Lerer
Adam
Bakhtin
Anton
Brown
Noam
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Computer Science - Computer Science and Game Theory
Human-Level Performance in No-Press Diplomacy via Equilibrium Search
Prior AI breakthroughs in complex games have focused on either the purely adversarial or purely cooperative settings. In contrast, Diplomacy is a game of shifting alliances that involves both cooperation and competition. For this reason, Diplomacy has proven to be a formidable research challenge. In this paper we describe an agent for the no-press variant of Diplomacy that combines supervised learning on human data with one-step lookahead search via regret minimization. Regret minimization techniques have been behind previous AI successes in adversarial games, most notably poker, but have not previously been shown to be successful in large-scale games involving cooperation. We show that our agent greatly exceeds the performance of past no-press Diplomacy bots, is unexploitable by expert humans, and ranks in the top 2% of human players when playing anonymous games on a popular Diplomacy website.
2021-05-03
arXiv.org
http://arxiv.org/abs/2010.02923
2021-06-28 15:28:02
arXiv: 2010.02923
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2010.02923.pdf
2021-06-28 15:28:18
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2010.02923
2021-06-28 15:28:22
1
text/html
journalArticle
arXiv:1708.01503 [math]
Akiyama
Rika
Abe
Nozomi
Fujita
Hajime
Inaba
Yukie
Hataoka
Mari
Ito
Shiori
Seita
Satomi
55A20 (Primary), 05A99 (Secondary)
Mathematics - Combinatorics
Mathematics - Geometric Topology
Mathematics - History and Overview
Maximum genus of the Jenga like configurations
We treat the boundary of the union of blocks in the Jenga game as a surface with a polyhedral structure and consider its genus. We generalize the game and determine the maximum genus of the generalized game.
2018-08-31
arXiv.org
http://arxiv.org/abs/1708.01503
2021-06-28 15:28:12
arXiv: 1708.01503
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1708.01503.pdf
2021-06-28 15:28:21
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1708.01503
2021-06-28 15:28:24
1
text/html
journalArticle
arXiv:1905.08617 [cs]
Bai
Chongyang
Bolonkin
Maksim
Burgoon
Judee
Chen
Chao
Dunbar
Norah
Singh
Bharat
Subrahmanian
V. S.
Wu
Zhe
Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Automatic Long-Term Deception Detection in Group Interaction Videos
Most work on automated deception detection (ADD) in video has two restrictions: (i) it focuses on a video of one person, and (ii) it focuses on a single act of deception in a one or two minute video. In this paper, we propose a new ADD framework which captures long term deception in a group setting. We study deception in the well-known Resistance game (like Mafia and Werewolf) which consists of 5-8 players of whom 2-3 are spies. Spies are deceptive throughout the game (typically 30-65 minutes) to keep their identity hidden. We develop an ensemble predictive model to identify spies in Resistance videos. We show that features from low-level and high-level video analysis are insufficient, but when combined with a new class of features that we call LiarRank, produce the best results. We achieve AUCs of over 0.70 in a fully automated setting. Our demo can be found at http://home.cs.dartmouth.edu/~mbolonkin/scan/demo/
2019-06-15
arXiv.org
http://arxiv.org/abs/1905.08617
2021-06-28 15:32:49
arXiv: 1905.08617
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1905.08617.pdf
2021-06-28 15:32:54
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1905.08617
2021-06-28 15:32:58
1
text/html
bookSection
10068
ISBN 978-3-319-50934-1 978-3-319-50935-8
Computers and Games
Cham
Springer International Publishing
Plaat
Aske
Kosters
Walter
van den Herik
Jaap
Bi
Xiaoheng
Tanaka
Tetsuro
Human-Side Strategies in the Werewolf Game Against the Stealth Werewolf Strategy
2016
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-319-50935-8_9
2021-06-28 15:32:54
Series Title: Lecture Notes in Computer Science
DOI: 10.1007/978-3-319-50935-8_9
93-102
attachment
Full Text
https://sci-hub.se/downloads/2019-01-26//f7/bi2016.pdf#view=FitH
2021-06-28 15:33:08
1
application/pdf
journalArticle
arXiv:0804.0071 [math]
Yao
Erlin
65C20
91-01
Mathematics - Probability
A Theoretical Study of Mafia Games
Mafia can be described as an experiment in human psychology and mass hysteria, or as a game between an informed minority and an uninformed majority. Focusing on a very restricted setting, Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2] showed that in the mafia game without detectives, if the civilians and mafias both adopt the optimal randomized strategy, then the two groups have comparable probabilities of winning exactly when the total player size is R and the mafia size is of order Sqrt(R). They also proposed a conjecture which stated that this phenomenon should be valid in a more extensive framework. In this paper, we first indicate that the main theorem given by Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2] cannot guarantee their conclusion, i.e., that the two groups have comparable winning probabilities when the mafia size is of order Sqrt(R). Then we give a theorem which validates the correctness of their conclusion. Finally, by proving the conjecture proposed by Mossel et al. [to appear in Ann. Appl. Probab. Volume 18, Number 2], we generalize the phenomenon to a more extensive framework, of which the mafia game without detectives is only a special case.
2008-04-01
arXiv.org
http://arxiv.org/abs/0804.0071
2021-06-28 15:33:04
arXiv: 0804.0071
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/0804.0071.pdf
2021-06-28 15:33:07
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/0804.0071
2021-06-28 15:33:10
1
text/html
bookSection
11302
ISBN 978-3-030-04178-6 978-3-030-04179-3
Neural Information Processing
Cham
Springer International Publishing
Cheng
Long
Leung
Andrew Chi Sing
Ozawa
Seiichi
Zilio
Felipe
Prates
Marcelo
Lamb
Luis
Neural Networks Models for Analyzing Magic: The Gathering Cards
2018
Neural Networks Models for Analyzing Magic
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-030-04179-3_20
2021-06-28 15:33:26
Series Title: Lecture Notes in Computer Science
DOI: 10.1007/978-3-030-04179-3_20
227-239
attachment
Submitted Version
https://arxiv.org/pdf/1810.03744
2021-06-28 15:33:36
1
application/pdf
conferencePaper
The Complexity of Deciding Legality of a Single Step of Magic: The Gathering
https://livrepository.liverpool.ac.uk/3029568/
conferencePaper
Magic: The Gathering in Common Lisp
https://vixra.org/abs/2001.0065
computerProgram
Magic: The Gathering in Common Lisp
https://github.com/jeffythedragonslayer/maglisp
thesis
Mathematical programming and Magic: The Gathering
https://commons.lib.niu.edu/handle/10843/19194
conferencePaper
Deck Construction Strategies for Magic: The Gathering
https://doi.org/10.1685/CSC06077
thesis
Deckbuilding in Magic: The Gathering Using a Genetic Algorithm
https://hdl.handle.net/11250/2462429
report
Magic: The Gathering Deck Performance Prediction
http://cs229.stanford.edu/proj2012/HauPlotkinTran-MagicTheGatheringDeckPerformancePrediction.pdf
computerProgram
A constraint programming based solver for Modern Art
https://github.com/captn3m0/modernart
journalArticle
arXiv:2103.00683 [cs]
Haliem
Marina
Bonjour
Trevor
Alsalem
Aala
Thomas
Shilpa
Li
Hongyu
Aggarwal
Vaneet
Bhargava
Bharat
Kejriwal
Mayank
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach
Learning how to adapt and make real-time informed decisions in dynamic and complex environments is a challenging problem. To learn this task, Reinforcement Learning (RL) relies on an agent interacting with an environment and learning through trial and error to maximize the cumulative sum of rewards received by it. In the multi-player Monopoly game, players have to make several decisions every turn, which involves complex actions such as making trades. This makes decision-making harder and thus introduces a highly complicated task for an RL agent to play and learn its winning strategies. In this paper, we introduce a Hybrid Model-Free Deep RL (DRL) approach that is capable of playing and learning winning strategies of the popular board game, Monopoly. To achieve this, our DRL agent (1) starts its learning process by imitating a rule-based agent (that resembles human logic) to initialize its policy, and (2) learns the successful actions and improves its policy using DRL. Experimental results demonstrate an intelligent behavior of our proposed agent as it shows high win rates against different types of agent-players.
2021-02-28
Learning Monopoly Gameplay
arXiv.org
http://arxiv.org/abs/2103.00683
2021-06-28 15:48:08
arXiv: 2103.00683
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2103.00683.pdf
2021-06-28 15:48:19
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2103.00683
2021-06-28 15:48:23
1
text/html
conferencePaper
ISBN 978-0-7803-7203-0
Proceedings 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation (Cat. No.01EX515)
DOI 10.1109/CIRA.2001.1013210
Banff, Alta., Canada
IEEE
Yasumura
Y.
Oguchi
K.
Nitta
K.
Negotiation strategy of agents in the MONOPOLY game
2001
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/1013210/
2021-06-28 15:49:10
277-281
2001 International Symposium on Computational Intelligence in Robotics and Automation
attachment
Full Text
https://moscow.sci-hub.se/3317/19346a5b777c1582800b51ee3a7cf5ed/negotiation-strategy-of-agents-in-the-monopoly-game.pdf#view=FitH
2021-06-28 15:49:15
1
application/pdf
conferencePaper
ISBN 978-1-4673-1194-6 978-1-4673-1193-9 978-1-4673-1192-2
2012 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/CIG.2012.6374168
Granada, Spain
IEEE
Friberger
Marie Gustafsson
Togelius
Julian
Generating interesting Monopoly boards from open data
September 2012
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6374168/
2021-06-28 15:49:18
288-295
2012 IEEE Conference on Computational Intelligence and Games (CIG)
attachment
Submitted Version
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.348.6099&rep=rep1&type=pdf
2021-06-28 15:49:32
1
application/pdf
conferencePaper
ISBN 978-1-4244-5770-0 978-1-4244-5771-7
Proceedings of the 2009 Winter Simulation Conference (WSC)
DOI 10.1109/WSC.2009.5429349
Austin, TX, USA
IEEE
Friedman
Eric J.
Henderson
Shane G.
Byuen
Thomas
Gallardo
German Gutierrez
Estimating the probability that the game of Monopoly never ends
December 2009
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/5429349/
2021-06-28 15:49:23
380-391
2009 Winter Simulation Conference (WSC 2009)
attachment
Full Text
https://moscow.sci-hub.se/3233/bacac19e84c764b72c627d05f55c0ad9/friedman2009.pdf#view=FitH
2021-06-28 15:49:32
1
application/pdf
report
Learning to Play Monopoly with Monte Carlo Tree Search
https://project-archive.inf.ed.ac.uk/ug4/20181042/ug4_proj.pdf
conferencePaper
ISBN 978-1-72811-895-6
TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON)
DOI 10.1109/TENCON.2019.8929523
Kochi, India
IEEE
Arun
Edupuganti
Rajesh
Harikrishna
Chakrabarti
Debarka
Cherala
Harikiran
George
Koshy
Monopoly Using Reinforcement Learning
October 2019
DOI.org (Crossref)
https://ieeexplore.ieee.org/document/8929523/
2021-06-28 15:49:50
858-862
TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON)
attachment
Full Text
https://sci-hub.se/downloads/2020-04-10/35/arun2019.pdf#view=FitH
2021-06-28 15:50:07
1
application/pdf
report
A Markovian Exploration of Monopoly
https://pi4.math.illinois.edu/wp-content/uploads/2014/10/Gartland-Burson-Ferguson-Markovopoly.pdf
conferencePaper
Learning to play Monopoly: A Reinforcement Learning approach
https://intelligence.csd.auth.gr/publication/conference-papers/learning-to-play-monopoly-a-reinforcement-learning-approach/
presentation
What’s the Best Monopoly Strategy?
https://core.ac.uk/download/pdf/48614184.pdf
journalArticle
Nakai
Kenichiro
Takenaga
Yasuhiko
NP-Completeness of Pandemic
2012
en
DOI.org (Crossref)
https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_article
2021-06-28 15:59:47
723-726
20
Journal of Information Processing
DOI 10.2197/ipsjjip.20.723
3
Journal of Information Processing
ISSN 1882-6652
attachment
Full Text
https://www.jstage.jst.go.jp/article/ipsjjip/20/3/20_723/_pdf
2021-06-28 15:59:50
1
application/pdf
thesis
On Solving Pentago
http://www.ke.tu-darmstadt.de/lehre/arbeiten/bachelor/2011/Buescher_Niklas.pdf
journalArticle
arXiv:1906.02330 [cs, stat]
Serrino
Jack
Kleiman-Weiner
Max
Parkes
David C.
Tenenbaum
Joshua B.
Computer Science - Machine Learning
Statistics - Machine Learning
Computer Science - Multiagent Systems
Finding Friend and Foe in Multi-Agent Games
Recent breakthroughs in AI for multi-agent games like Go, Poker, and Dota, have seen great strides in recent years. Yet none of these games address the real-life challenge of cooperation in the presence of unknown and uncertain teammates. This challenge is a key game mechanism in hidden role games. Here we develop the DeepRole algorithm, a multi-agent reinforcement learning agent that we test on The Resistance: Avalon, the most popular hidden role game. DeepRole combines counterfactual regret minimization (CFR) with deep value networks trained through self-play. Our algorithm integrates deductive reasoning into vector-form CFR to reason about joint beliefs and deduce partially observable actions. We augment deep value networks with constraints that yield interpretable representations of win probabilities. These innovations enable DeepRole to scale to the full Avalon game. Empirical game-theoretic methods show that DeepRole outperforms other hand-crafted and learned agents in five-player Avalon. DeepRole played with and against human players on the web in hybrid human-agent teams. We find that DeepRole outperforms human players as both a cooperator and a competitor.
2019-06-05
arXiv.org
http://arxiv.org/abs/1906.02330
2021-06-28 16:00:28
arXiv: 1906.02330
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1906.02330.pdf
2021-06-28 16:00:35
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1906.02330
2021-06-28 16:00:38
1
text/html
thesis
A Mathematical Analysis of the Game of Santorini
https://openworks.wooster.edu/independentstudy/8917/
computerProgram
A Mathematical Analysis of the Game of Santorini
https://github.com/carsongeissler/SantoriniIS
report
The complexity of Scotland Yard
https://eprints.illc.uva.nl/id/eprint/193/1/PP-2006-18.text.pdf
conferencePaper
ISBN 978-1-4799-3547-5
2014 IEEE Conference on Computational Intelligence and Games
DOI 10.1109/CIG.2014.6932907
Dortmund, Germany
IEEE
Szubert
Marcin
Jaskowski
Wojciech
Temporal difference learning of N-tuple networks for the game 2048
August 2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6932907/
2021-06-28 16:09:20
1-8
2014 IEEE Conference on Computational Intelligence and Games (CIG)
attachment
Submitted Version
https://www.cs.put.poznan.pl/mszubert/pub/szubert2014cig.pdf
2021-06-28 16:09:26
1
application/pdf
journalArticle
arXiv:1501.03837 [cs]
Abdelkader
Ahmed
Acharya
Aditya
Dasler
Philip
Computer Science - Computational Complexity
F.2.2
On the Complexity of Slide-and-Merge Games
We study the complexity of a particular class of board games, which we call `slide and merge' games. Namely, we consider 2048 and Threes, which are among the most popular games of their type. In both games, the player is required to slide all rows or columns of the board in one direction to create a high value tile by merging pairs of equal tiles into one with the sum of their values. This combines features from both block pushing and tile matching puzzles, like Push and Bejeweled, respectively. We define a number of natural decision problems on a suitable generalization of these games and prove NP-hardness for 2048 by reducing from 3SAT. Finally, we discuss the adaptation of our reduction to Threes and conjecture a similar result.
2015-01-15
arXiv.org
http://arxiv.org/abs/1501.03837
2021-06-28 16:09:34
arXiv: 1501.03837
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1501.03837.pdf
2021-06-28 16:09:48
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1501.03837
2021-06-28 16:09:52
1
text/html
journalArticle
DOI 10.4230/LIPICS.FUN.2016.1
Abdelkader
Ahmed
Acharya
Aditya
Dasler
Philip
Herbstritt
Marc
000 Computer science, knowledge, general works
Computer Science
2048 Without New Tiles Is Still Hard
2016
en
DOI.org (Datacite)
http://drops.dagstuhl.de/opus/volltexte/2016/5885/
2021-06-28 16:09:58
Artwork Size: 14 pages
Medium: application/pdf
Publisher: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, Wadern/Saarbruecken, Germany
14 pages
conferencePaper
MDA: A Formal Approach to Game Design and Game Research
https://aaai.org/Library/Workshops/2004/ws04-04-001.php
conferencePaper
6
ISSN 2342-9666
Think Design Play
DiGRA/Utrecht School of the Arts
Exploring anonymity in cooperative board games
This study was done as a part of a larger research project where the interest was on exploring if and how gameplay design could give informative principles to the design of educational activities. The researchers conducted a series of studies trying to map game mechanics that had the special quality of being inclusive, i.e., playable by a diverse group of players. This specific study focused on designing a cooperative board game with the goal of implementing anonymity as a game mechanic. Inspired by the gameplay design patterns methodology (Björk & Holopainen 2005a; 2005b; Holopainen & Björk 2008), mechanics from existing cooperative board games were extracted and analyzed in order to inform the design process. The results from prototyping and play testing indicated that it is possible to implement anonymous actions in cooperative board games and that this mechanic made rather unique forms of gameplay possible. These design patterns can be further developed in order to address inclusive educational practices.
January 2011
http://www.digra.org/digital-library/publications/exploring-anonymity-in-cooperative-board-games/
2011 DiGRA International Conference
journalArticle
arXiv:2107.07630 [cs]
Siu
Ho Chit
Pena
Jaime D.
Chang
Kimberlee C.
Chen
Edenna
Zhou
Yutai
Lopez
Victor J.
Palko
Kyle
Allen
Ross E.
Computer Science - Artificial Intelligence
Computer Science - Human-Computer Interaction
Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi
Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference of AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.
2021-07-19
arXiv.org
https://arxiv.org/abs/2107.07630
2021-07-24 06:30:44
arXiv: 2107.07630
attachment
86e8f7ab32cfd12577bc2619bc635690-Paper.pdf
https://papers.neurips.cc/paper/2021/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf
2022-01-11 07:50:59
3
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2107.07630.pdf
2021-07-24 06:31:01
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2107.07630
2021-07-24 06:31:06
1
text/html
journalArticle
Litwiller
Bonnie H.
Duncan
David R.
Probabilities in Yahtzee
Teachers of units in probability are often interested in providing examples of probabilistic situations in a nonclassroom setting. Games are a rich source of such probabilities. Many people enjoy playing a commercial game called Yahtzee. A Yahtzee player receives points for achieving various specified numerical combinations of five dice during the three rolls that constitute a turn.
December 1982
DOI.org (Crossref)
https://pubs.nctm.org/view/journals/mt/75/9/article-p751.xml
2021-07-24 07:53:57
751-754
75
The Mathematics Teacher
DOI 10.5951/MT.75.9.0751
9
MT
ISSN 0025-5769, 2330-0582
presentation
Verhoeff
Tom
Optimal Solitaire Yahtzee Strategies
http://www.yahtzee.org.uk/optimal_yahtzee_TV.pdf
journalArticle
Bonarini
Andrea
Lazaric
Alessandro
Restelli
Marcello
Yahtzee: a Large Stochastic Environment for RL Benchmarks
Yahtzee is a game that is regularly played by more than 100 million people in the world. We propose a simplified version of Yahtzee as a benchmark for RL algorithms. We have already used it for this purpose, and an implementation is available.
http://researchers.lille.inria.fr/~lazaric/Webpage/PublicationsByTopic_files/bonarini2005yahtzee.pdf
1
thesis
KTH, School of Computer Science and Communication (CSC)
Serra
Andreas
Niigata
Kai Widell
Optimal Yahtzee performance in multi-player games
Yahtzee is a game with a moderately large search space, dependent on the factor of luck. This makes it not quite trivial to implement an optimal strategy for it. Using the optimal strategy for single-player use, comparisons against other algorithms are made and the results are analyzed for hints on what it could take to make an algorithm that could beat the single-player optimal strategy.
April 12, 2013
en
http://www.diva-portal.org/smash/get/diva2:668705/FULLTEXT01.pdf
https://www.csc.kth.se/utbildning/kth/kurser/DD143X/dkand13/Group4Per/report/12-serra-widell-nigata.pdf
17
Independent thesis Basic level (degree of Bachelor)
manuscript
Verhoeff
Tom
How to Maximize Your Score in Solitaire Yahtzee
Yahtzee is a well-known game played with five dice. Players take turns at assembling and scoring dice patterns. The player with the highest score wins. Solitaire Yahtzee is a single-player version of Yahtzee aimed at maximizing one’s score. A strategy for playing Yahtzee determines which choice to make in each situation of the game. We show that the maximum expected score over all Solitaire Yahtzee strategies is 254.5896…
en
http://www-set.win.tue.nl/~wstomv/misc/yahtzee/yahtzee-report-unfinished.pdf
18
Incomplete Draft
thesis
Yale University, Department of Computer Science
Vasseur
Philip
Using Deep Q-Learning to Compare Strategy Ladders of Yahtzee
“Bots” playing games is not a new concept, likely going back to the first video games. However, there has been a new wave recently using machine learning to learn to play games at a near optimal level - essentially using neural networks to “solve” games. Depending on the game, this can be relatively straightforward using supervised learning. However, this requires having data for optimal play, which is often not possible due to the sheer complexity of many games. For example, solitaire Yahtzee has this data available, but two-player Yahtzee does not due to the massive state space. A recent trend in response to this started with Google DeepMind in 2013, who used Deep Reinforcement Learning to play various Atari games [4].
This project will apply Deep Reinforcement Learning (specifically Deep Q-Learning) and measure how an agent learns to play Yahtzee in the form of a strategy ladder. A strategy ladder is a way of looking at how the performance of an AI varies with the computational resources it uses. Different sets of rules change how the AI learns, which varies the strategy ladder itself. This project will vary the upper bonus threshold and then attempt to measure how “good” the various strategy ladders are - in essence attempting to find the set of rules which creates the “best” version of Yahtzee. We assume/expect that there is some correlation between strategy ladders for AI and strategy ladders for humans, meaning that a game with a “good” strategy ladder for an AI indicates that the game is interesting and challenging for humans.
December 12, 2019
en
https://raw.githubusercontent.com/philvasseur/Yahtzee-DQN-Thesis/dcf2bfe15c3b8c0ff3256f02dd3c0aabdbcbc9bb/webpage/final_report.pdf
12
report
KTH Royal Institute of Technology, School of Computer Science and Communication
Defensive Yahtzee
In this project an algorithm has been created that plays Yahtzee using rule-based heuristics. The focus is on getting a high lowest score and a high 10th percentile. All rules of Yahtzee and the probabilities for each combination have been studied, and based on this each turn is optimized to get a guaranteed decent high score. The algorithm got a lowest score of 79 and a 10th percentile of 152 when executed 100 000 times.
https://www.diva-portal.org/smash/get/diva2:817838/FULLTEXT01.pdf
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168668
22
report
Glenn
James
An Optimal Strategy for Yahtzee
http://www.cs.loyola.edu/~jglenn/research/optimal_yahtzee.pdf
presentation
Middlebury College
R. Teal Witter
Alex Lyford
Applications of Graph Theory and Probability in the Board Game Ticket to Ride
January 16, 2020
https://www.rtealwitter.com/slides/2020-JMM.pdf
attachment
Full Text
https://www.rtealwitter.com/slides/2020-JMM.pdf
2021-07-24 08:18:37
1
application/pdf
conferencePaper
ISBN 978-92-837-2336-3
14th NATO Operations Research and Analysis (OR&A) Conference: Emerging and Disruptive Technology
DOI 10.14339/STO-MP-SAS-OCS-ORA-2020-WCM-01-PDF
NATO
Christoffer Limér
Erik Kalmér
Mika Cohen
Monte Carlo Tree Search for Risk
February 16, 2021
en
AC/323(SAS-ACT)TP/1017
https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf
attachment
Full Text
https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01.pdf
2021-07-24 08:34:15
1
application/pdf
presentation
Christoffer Limér
Erik Kalmér
Wargaming with Monte-Carlo Tree Search
February 16, 2021
en
https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf
attachment
Full Text
https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2020/MP-SAS-OCS-ORA-2020-WCM-01P.pdf
2021-07-24 08:35:04
1
application/pdf
journalArticle
arXiv:1910.04376 [cs]
Zha
Daochen
Lai
Kwei-Herng
Cao
Yuanpu
Huang
Songyi
Wei
Ruzhe
Guo
Junyu
Hu
Xia
Computer Science - Artificial Intelligence
RLCard: A Toolkit for Reinforcement Learning in Card Games
RLCard is an open-source toolkit for reinforcement learning research in card games. It supports various card environments with easy-to-use interfaces, including Blackjack, Leduc Hold'em, Texas Hold'em, UNO, Dou Dizhu and Mahjong. The goal of RLCard is to bridge reinforcement learning and imperfect information games, and push forward the research of reinforcement learning in domains with multiple agents, large state and action space, and sparse reward. In this paper, we provide an overview of the key components in RLCard, a discussion of the design principles, a brief introduction of the interfaces, and comprehensive evaluations of the environments. The codes and documents are available at https://github.com/datamllab/rlcard
2020-02-14
RLCard
arXiv.org
http://arxiv.org/abs/1910.04376
2021-07-24 08:40:55
arXiv: 1910.04376
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1910.04376.pdf
2021-07-24 08:40:59
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1910.04376
2021-07-24 08:41:03
1
text/html
journalArticle
arXiv:2009.12065 [cs]
Gaina
Raluca D.
Balla
Martin
Dockhorn
Alexander
Montoliu
Raul
Perez-Liebana
Diego
Computer Science - Artificial Intelligence
Design and Implementation of TAG: A Tabletop Games Framework
This document describes the design and implementation of the Tabletop Games framework (TAG), a Java-based benchmark for developing modern board games for AI research. TAG provides a common skeleton for implementing tabletop games based on a common API for AI agents, a set of components and classes to easily add new games and an import module for defining data in JSON format. At present, this platform includes the implementation of seven different tabletop games that can also be used as an example for further developments. Additionally, TAG also incorporates logging functionality that allows the user to perform a detailed analysis of the game, in terms of action space, branching factor, hidden information, and other measures of interest for Game AI research. The objective of this document is to serve as a central point where the framework can be described at length. TAG can be downloaded at: https://github.com/GAIGResearch/TabletopGames
2020-09-25
Design and Implementation of TAG
arXiv.org
http://arxiv.org/abs/2009.12065
2021-07-24 08:41:01
arXiv: 2009.12065
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2009.12065.pdf
2021-07-24 08:41:07
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2009.12065
2021-07-24 08:41:11
1
text/html
computerProgram
Adam Stelmaszczyk
Game Tree Search Algorithms - C++ library for AI bot programming.
2015
Game Tree Search Algorithms
https://github.com/AdamStelmaszczyk/gtsa
C++
computerProgram
Raluca D. Gaina
Martin Balla
Alexander Dockhorn
Raul Montoliu
Diego Perez-Liebana
TAG: Tabletop Games Framework
The Tabletop Games Framework (TAG) is a Java-based benchmark for developing modern board games for AI research. TAG provides a common skeleton for implementing tabletop games based on a common API for AI agents, a set of components and classes to easily add new games and an import module for defining data in JSON format. At present, this platform includes the implementation of seven different tabletop games that can also be used as an example for further developments. Additionally, TAG also incorporates logging functionality that allows the user to perform a detailed analysis of the game, in terms of action space, branching factor, hidden information, and other measures of interest for Game AI research.
https://github.com/GAIGResearch/TabletopGames
MIT License
Java
journalArticle
Osawa
Hirotaka
Kawagoe
Atsushi
Sato
Eisuke
Kato
Takuya
Emergence of Cooperative Impression With Self-Estimation, Thinking Time, and Concordance of Risk Sensitivity in Playing Hanabi
The authors evaluate the extent to which a user’s impression of an AI agent can be improved by giving the agent the ability of self-estimation, thinking time, and coordination of risk tendency. The authors modified the algorithm of an AI agent in the cooperative game Hanabi to have all of these traits, and investigated the change in the user’s impression by playing with the user. The authors used a self-estimation task to evaluate the effect that the ability to read the intention of a user had on an impression. The authors also show that an agent’s thinking time influences the impression it makes. The authors also investigated the relationship between the concordance of the risk-taking tendencies of players and agents, the player’s impression of agents, and the game experience. The results of the self-estimation task experiment showed that the more accurate the estimation of the agent’s self, the more likely it is that the partner will perceive humanity, affinity, intelligence, and communication skills in the agent. The authors also found that an agent that changes the length of thinking time according to the priority of action gives the impression that it is smarter than an agent with a normal thinking time when the player notices the difference in thinking time or an agent that randomly changes the thinking time. The result of the experiment regarding concordance of the risk-taking tendency shows that it influences the player’s impression of agents. These results suggest that game agent designers can improve the player’s disposition toward an agent and the game experience by adjusting the agent’s self-estimation level, thinking time, and risk-taking tendency according to the player’s personality and inner state during the game.
2021-10-12
DOI.org (Crossref)
https://www.frontiersin.org/articles/10.3389/frobt.2021.658348/full
2021-11-24 07:14:38
658348
8
Frontiers in Robotics and AI
DOI 10.3389/frobt.2021.658348
Front. Robot. AI
ISSN 2296-9144
attachment
Full Text
https://www.frontiersin.org/articles/10.3389/frobt.2021.658348/pdf
2021-11-24 07:15:06
1
application/pdf
journalArticle
arXiv:2112.03178 [cs]
Schmid
Martin
Moravcik
Matej
Burch
Neil
Kadlec
Rudolf
Davidson
Josh
Waugh
Kevin
Bard
Nolan
Timbers
Finbarr
Lanctot
Marc
Holland
Zach
Davoodi
Elnaz
Christianson
Alden
Bowling
Michael
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Computer Science - Computer Science and Game Theory
Player of Games
Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increases. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
2021-12-06
arXiv.org
http://arxiv.org/abs/2112.03178
2021-12-12 07:05:28
arXiv: 2112.03178
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2112.03178.pdf
2021-12-12 07:05:42
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2112.03178
2021-12-12 07:05:47
1
text/html
journalArticle
Silver
David
Hubert
Thomas
Schrittwieser
Julian
Antonoglou
Ioannis
Lai
Matthew
Guez
Arthur
Lanctot
Marc
Sifre
Laurent
Kumaran
Dharshan
Graepel
Thore
Lillicrap
Timothy
Simonyan
Karen
Hassabis
Demis
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
2018-12-07
en
DOI.org (Crossref)
https://www.science.org/doi/10.1126/science.aar6404
2021-12-12 07:05:58
1140-1144
362
Science
DOI 10.1126/science.aar6404
6419
Science
ISSN 0036-8075, 1095-9203
attachment
Submitted Version
https://discovery.ucl.ac.uk/id/eprint/10069050/1/alphazero_preprint.pdf
2021-12-12 07:06:12
1
application/pdf
thesis
Fort Worth, Texas
Texas Christian University
Nagel, Lauren
Analysis of 'The Settlers of Catan' Using Markov Chains
Markov chains are stochastic models characterized by the probability of future states depending solely on one's current state. Google's page ranking system, financial phenomena such as stock market crashes, and algorithms to predict a company's projected sales are a glimpse into the array of applications for Markov models. Board games such as Monopoly and Risk have also been studied under the lens of Markov decision processes. In this research, we analyzed the board game "The Settlers of Catan" using transition matrices. Transition matrices have one row for each current state i and one column for each succeeding state j, with the entry (i,j) containing the probability that the current state i will transition to the state j. Using these transition matrices, we delved into addressing the question of which starting positions are optimal. Furthermore, we worked on determining optimality in conjunction with a player's gameplay strategy. After building a simulation of the game in Python, we tested the results of our theoretical research against the mock run-throughs to observe how well our model prevailed under the limitations of time (number of turns before a winner is reached).
May 3, 2021
en
https://repository.tcu.edu/handle/116099117/49062
53
attachment
Full Text
https://repository.tcu.edu/bitstream/116099117/49062/1/Nagel__Lauren-Honors_Project.pdf
2021-12-19 11:15:58
1
application/pdf
attachment
Nagel__Lauren-Honors_Project.pdf
https://repository.tcu.edu/bitstream/handle/116099117/49062/Nagel__Lauren-Honors_Project.pdf?sequence=1&isAllowed=y
2021-12-19 11:15:50
3
journalArticle
arXiv:2009.00655 [cs]
Ward
Henry N.
Brooks
Daniel J.
Troha
Dan
Mills
Bobby
Khakhalin
Arseny S.
Computer Science - Artificial Intelligence
AI solutions for drafting in Magic: the Gathering
Drafting in Magic the Gathering is a sub-game within a larger trading card game, where several players progressively build decks by picking cards from a common pool. Drafting poses an interesting problem for game and AI research due to its large search space, mechanical complexity, multiplayer nature, and hidden information. Despite this, drafting remains understudied, in part due to a lack of high-quality, public datasets. To rectify this problem, we present a dataset of over 100,000 simulated, anonymized human drafts collected from Draftsim.com. We also propose four diverse strategies for drafting agents, including a primitive heuristic agent, an expert-tuned complex heuristic agent, a Naive Bayes agent, and a deep neural network agent. We benchmark their ability to emulate human drafting, and show that the deep neural network agent outperforms other agents, while the Naive Bayes and expert-tuned agents outperform simple heuristics. We analyze the accuracy of AI agents across the timeline of a draft, and describe unique strengths and weaknesses for each approach. This work helps to identify next steps in the creation of humanlike drafting agents, and can serve as a benchmark for the next generation of drafting bots.
2021-04-04
AI solutions for drafting in Magic
arXiv.org
http://arxiv.org/abs/2009.00655
2021-12-19 11:19:03
arXiv: 2009.00655
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2009.00655.pdf
2021-12-19 11:19:09
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2009.00655
2021-12-19 11:19:13
1
text/html
journalArticle
arXiv:1404.0743 [cs]
Irving
Geoffrey
Computer Science - Distributed, Parallel, and Cluster Computing
Pentago is a First Player Win: Strongly Solving a Game Using Parallel In-Core Retrograde Analysis
We present a strong solution of the board game pentago, computed using exhaustive parallel retrograde analysis in 4 hours on 98304 (3 × 2^15) threads of NERSC's Cray Edison. At 3.0 × 10^15 states, pentago is the largest divergent game solved to date by two orders of magnitude, and the only example of a nontrivial divergent game solved using retrograde analysis. Unlike previous retrograde analyses, our computation was performed entirely in-core, writing only a small portion of the results to disk; an out-of-core implementation would have been much slower. Symmetry was used to reduce branching factor and exploit instruction level parallelism. Despite a theoretically embarrassingly parallel structure, asynchronous message passing was required to fit the computation into available RAM, causing latency problems on an older Cray machine. All code and data for the project are open source, together with a website which combines database lookup and on-the-fly computation to interactively explore the strong solution.
2014-04-03
Pentago is a First Player Win
arXiv.org
http://arxiv.org/abs/1404.0743
2021-12-19 11:20:46
arXiv: 1404.0743
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1404.0743.pdf
2021-12-19 11:20:58
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1404.0743
2021-12-19 11:21:03
1
text/html
attachment
Source Code
https://github.com/girving/pentago
2021-12-19 11:21:48
3
computerProgram
A massively parallel pentago solver
https://github.com/girving/pentago
computerProgram
An interactive explorer for perfect pentago play
https://perfect-pentago.net/
journalArticle
arXiv:1811.00673 [stat]
Gilbert
Daniel E.
Wells
Martin T.
Statistics - Applications
Ludometrics: Luck, and How to Measure It
Game theory is the study of tractable games which may be used to model more complex systems. Board games, video games and sports, however, are intractable by design, so "ludological" theories about these games as complex phenomena should be grounded in empiricism. A first "ludometric" concern is the empirical measurement of the amount of luck in various games. We argue against a narrow view of luck which includes only factors outside any player's control, and advocate for a holistic definition of luck as complementary to the variation in effective skill within a population of players. We introduce two metrics for luck in a game for a given population - one information theoretical, and one Bayesian, and discuss the estimation of these metrics using sparse, high-dimensional regression techniques. Finally, we apply these techniques to compare the amount of luck between various professional sports, between Chess and Go, and between two hobby board games: Race for the Galaxy and Seasons.
2018-11-01
Ludometrics
arXiv.org
http://arxiv.org/abs/1811.00673
2021-12-19 11:25:28
arXiv: 1811.00673
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1811.00673.pdf
2021-12-19 11:25:31
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1811.00673
2021-12-19 11:25:35
1
text/html
journalArticle
arXiv:2102.10540 [cs]
Perez
Luis
Computer Science - Artificial Intelligence
Computer Science - Multiagent Systems
Computer Science - Computer Science and Game Theory
Mastering Terra Mystica: Applying Self-Play to Multi-agent Cooperative Board Games
In this paper, we explore and compare multiple algorithms for solving the complex strategy game of Terra Mystica, hereafter abbreviated as TM. Previous work in the area of super-human game-play using AI has proven effective, with recent breakthroughs for generic algorithms in games such as Go, Chess, and Shogi (AlphaZero). We directly apply these breakthroughs to a novel state-representation of TM with the goal of creating an AI that will rival human players. Specifically, we present the initial results of applying AlphaZero to this state-representation and analyze the strategies developed. A brief analysis is presented. We call this modified algorithm with our novel state-representation AlphaTM. In the end, we discuss the success and shortcomings of this method by comparing against multiple baselines and typical human scores. All code used for this paper is available on GitHub at https://github.com/kandluis/terrazero
2021-02-21
Mastering Terra Mystica
arXiv.org
http://arxiv.org/abs/2102.10540
2021-12-19 11:25:55
arXiv: 2102.10540
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2102.10540.pdf
2021-12-19 11:26:10
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2102.10540
2021-12-19 11:26:14
1
text/html
attachment
Dataset
https://www.kaggle.com/lemonkoala/terra-mystica
2021-12-19 11:27:41
3
attachment
Source Code
https://github.com/kandluis/terrazero
2021-12-19 11:29:03
3
computerProgram
TM AI: Play TM against AI players.
https://lodev.org/tmai/
journalArticle
arXiv:1710.05121 [cs]
Bosboom
Jeffrey
Hoffmann
Michael
Computer Science - Computational Complexity
F.1.3
Netrunner Mate-in-1 or -2 is Weakly NP-Hard
We prove that deciding whether the Runner can win this turn (mate-in-1) in the Netrunner card game generalized to allow decks to contain an arbitrary number of copies of a card is weakly NP-hard. We also prove that deciding whether the Corp can win within two turns (mate-in-2) in this generalized Netrunner is weakly NP-hard.
2017-10-13
arXiv.org
http://arxiv.org/abs/1710.05121
2021-12-19 11:33:02
arXiv: 1710.05121
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1710.05121.pdf
2021-12-19 11:33:05
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1710.05121
2021-12-19 11:33:09
1
text/html
journalArticle
arXiv:1904.10656 [cs]
Fontaine
Matthew C.
Lee
Scott
Soros
L. B.
Silva
Fernando De Mesentier
Togelius
Julian
Hoover
Amy K.
Computer Science - Neural and Evolutionary Computing
Mapping Hearthstone Deck Spaces through MAP-Elites with Sliding Boundaries
Quality diversity (QD) algorithms such as MAP-Elites have emerged as a powerful alternative to traditional single-objective optimization methods. They were initially applied to evolutionary robotics problems such as locomotion and maze navigation, but have yet to see widespread application. We argue that these algorithms are perfectly suited to the rich domain of video games, which contains many relevant problems with a multitude of successful strategies and often also multiple dimensions along which solutions can vary. This paper introduces a novel modification of the MAP-Elites algorithm called MAP-Elites with Sliding Boundaries (MESB) and applies it to the design and rebalancing of Hearthstone, a popular collectible card game chosen for its number of multidimensional behavior features relevant to particular styles of play. To avoid overpopulating cells with conflated behaviors, MESB slides the boundaries of cells based on the distribution of evolved individuals. Experiments in this paper demonstrate the performance of MESB in Hearthstone. Results suggest MESB finds diverse ways of playing the game well along the selected behavioral dimensions. Further analysis of the evolved strategies reveals common patterns that recur across behavioral dimensions and explores how MESB can help rebalance the game.
2019-04-24
arXiv.org
http://arxiv.org/abs/1904.10656
2021-12-19 11:33:35
arXiv: 1904.10656
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1904.10656.pdf
2021-12-19 11:33:53
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1904.10656
2021-12-19 11:33:57
1
text/html
journalArticle
arXiv:2112.09697 [cs]
Galván
Edgar
Simpson
Gavin
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Computer Science - Neural and Evolutionary Computing
On the Evolution of the MCTS Upper Confidence Bounds for Trees by Means of Evolutionary Algorithms in the Game of Carcassonne
Monte Carlo Tree Search (MCTS) is a sampling best-first method to search for optimal decisions. The MCTS's popularity is based on its extraordinary results in the challenging two-player based game Go, a game considered much harder than Chess and that until very recently was considered infeasible for Artificial Intelligence methods. The success of MCTS depends heavily on how the tree is built, and the selection process plays a fundamental role in this. One particular selection mechanism that has proved to be reliable is based on the Upper Confidence Bounds for Trees, commonly referred to as UCT. The UCT attempts to nicely balance exploration and exploitation by considering the values stored in the statistical tree of the MCTS. However, some tuning of the MCTS UCT is necessary for this to work well. In this work, we use Evolutionary Algorithms (EAs) to evolve mathematical expressions with the goal to substitute the UCT mathematical expression. We compare our proposed approach, called Evolution Strategy in MCTS (ES-MCTS), against five variants of the MCTS UCT, three variants of the star-minimax family of algorithms, as well as a random controller in the Game of Carcassonne. We also use a variant of our proposed EA-based controller, dubbed ES partially integrated in MCTS. We show how the ES-MCTS controller is able to outperform all these 10 intelligent controllers, including robust MCTS UCT controllers.
2021-12-17
arXiv.org
http://arxiv.org/abs/2112.09697
2021-12-25 07:03:23
arXiv: 2112.09697
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2112.09697.pdf
2021-12-25 07:03:26
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2112.09697
2021-12-25 07:03:32
1
text/html
conferencePaper
Brandon Cui
Hengyuan Hu
Luis Pineda
Jakob Foerster
K-level Reasoning for Zero-Shot Coordination in Hanabi
The standard problem setting in cooperative multi-agent settings is self-play (SP), where the goal is to train a team of agents that works well together. However, optimal SP policies commonly contain arbitrary conventions ("handshakes") and are not compatible with other, independently trained agents or humans. This latter desideratum was recently formalized by Hu et al. (2020) as the zero-shot coordination (ZSC) setting and partially addressed with their Other-Play (OP) algorithm, which showed improved ZSC and human-AI performance in the card game Hanabi. OP assumes access to the symmetries of the environment and prevents agents from breaking these in a mutually incompatible way during training. However, as the authors point out, discovering symmetries for a given environment is a computationally hard problem. Instead, we show that through a simple adaptation of k-level reasoning (KLR) (Costa-Gomes et al., 2006), synchronously training all levels, we can obtain competitive ZSC and ad-hoc teamplay performance in Hanabi, including when paired with a human-like proxy bot. We also introduce a new method, synchronous k-level reasoning with a best response (SyKLRBR), which further improves performance on our synchronous KLR by co-training a best response.
https://papers.neurips.cc/paper/2021/hash/4547dff5fd7604f18c8ee32cf3da41d7-Abstract.html
Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)
attachment
Paper
https://papers.neurips.cc/paper/2021/file/4547dff5fd7604f18c8ee32cf3da41d7-Paper.pdf
2022-01-11 07:52:40
3
attachment
Supplemental
https://papers.neurips.cc/paper/2021/file/4547dff5fd7604f18c8ee32cf3da41d7-Supplemental.pdf
2022-01-11 07:52:49
3
journalArticle
Ford
Cassandra
Ohata
Merrick
Game Balancing in Dominion: An Approach to Identifying Problematic Game Elements
In the popular card game Dominion, the configuration of game elements greatly affects the experience for players. If one were redesigning Dominion, therefore, it may be useful to identify game elements that reduce the number of viable strategies in any given game configuration - i.e. elements that are unbalanced. In this paper, we propose an approach that assigns credit to the outcome of an episode to individual elements. Our approach uses statistical analysis to learn the interactions and dependencies between game elements. This learned knowledge is used to recommend elements to game designers for further consideration. Designers may then choose to modify the recommended elements with the goal of increasing the number of viable strategies.
en
https://web.archive.org/web/20220516093249/http://cs.gettysburg.edu/~tneller/games/aiagd/papers/EAAI-00039-FordC.pdf
Zotero
http://cs.gettysburg.edu/~tneller/games/aiagd/papers/EAAI-00039-FordC.pdf
7
attachment
Ford and Ohata - Game Balancing in Dominion An Approach to Identif.pdf
http://cs.gettysburg.edu/~tneller/games/aiagd/papers/EAAI-00039-FordC.pdf
2022-03-12 09:44:51
1
application/pdf
journalArticle
arXiv:2203.11656 [cs]
Grooten
Bram
Wemmenhove
Jelle
Poot
Maurice
Portegies
Jim
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Computer Science - Multiagent Systems
Is Vanilla Policy Gradient Overlooked? Analyzing Deep Reinforcement Learning for Hanabi
In pursuit of enhanced multi-agent collaboration, we analyze several on-policy deep reinforcement learning algorithms in the recently published Hanabi benchmark. Our research suggests a perhaps counter-intuitive finding, where Proximal Policy Optimization (PPO) is outperformed by Vanilla Policy Gradient over multiple random seeds in a simplified environment of the multi-agent cooperative card game. In our analysis of this behavior we look into Hanabi-specific metrics and hypothesize a reason for PPO's plateau. In addition, we provide proofs for the maximum length of a perfect game (71 turns) and any game (89 turns). Our code can be found at: https://github.com/bramgrooten/DeepRL-for-Hanabi
2022-03-22
Is Vanilla Policy Gradient Overlooked?
arXiv.org
http://arxiv.org/abs/2203.11656
2022-03-26 04:22:52
arXiv: 2203.11656
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2203.11656.pdf
2022-03-26 04:24:09
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2203.11656
2022-03-26 04:24:17
1
text/html
blogPost
Henry Charlesworth
Learning to Play Settlers of Catan with Deep Reinforcement Learning
https://settlers-rl.github.io/
journalArticle
Nguyen
Viet-Ha
Perrot
Kévin
Rikudo is NP-complete
April 2022
en
DOI.org (Crossref)
https://linkinghub.elsevier.com/retrieve/pii/S0304397522000457
2022-04-19 07:12:59
34-47
910
Theoretical Computer Science
DOI 10.1016/j.tcs.2022.01.034
Theoretical Computer Science
ISSN 03043975
conferencePaper
ISBN 978-1-4503-9143-6
Proceedings of the 2022 ACM/SPEC on International Conference on Performance Engineering
DOI 10.1145/3489525.3511685
Beijing China
ACM
de Goede
Danilo
Kampert
Duncan
Varbanescu
Ana Lucia
The Cost of Reinforcement Learning for Game Engines: The AZ-Hive Case-study
2022-04-09
en
The Cost of Reinforcement Learning for Game Engines
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3489525.3511685
2022-04-19 07:14:12
145-152
ICPE '22: ACM/SPEC International Conference on Performance Engineering
attachment
Full Text
https://dl.acm.org/doi/pdf/10.1145/3489525.3511685
2022-04-19 07:14:15
1
application/pdf
journalArticle
IEEE Transactions on Games
DOI 10.1109/TG.2022.3169168
IEEE Trans. Games
ISSN 2475-1502, 2475-1510
Canaan
Rodrigo
Gao
Xianbo
Togelius
Julian
Nealen
Andy
Menzel
Stefan
Generating and Adapting to Diverse Ad-Hoc Partners in Hanabi
2022
DOI.org (Crossref)
https://ieeexplore.ieee.org/document/9762901/
2022-04-30 05:11:33
1-1
thesis
Rijksuniversiteit Groningen
Nicholas Kees Dupuis
Theory of Mind for Multi-agent Coordination in Hanabi
In order to successfully coordinate in complex multi-agent environments, AI systems need the ability to build useful models of others. Building such models often benefits from the use of theory of mind: representing unobservable mental states of another agent, including their desires, beliefs, and intentions. In this paper I show how theory of mind affects the ability of agents to coordinate in the cooperative card game Hanabi. Playing Hanabi well with a wide range of partners requires reasoning about the beliefs and intentions of other players, which makes Hanabi a perfect testbed for studying theory of mind. I use both symbolic agent-based models, which play a simplified version of the game and explicitly engage in theory of mind, and reinforcement learning agents, which use meta-learning to play the full version of the game. Both methods were used to build models of other agents and thereby test how theory of mind can both promote coordination and lead to coordination failure. My research demonstrates that the effect of theory of mind is highly variable and depends heavily on the type of theory of mind reasoning being done by the partner. The empirical results of the agent-based models suggest that theory of mind is best applied when the joint policy produced without theory of mind is far from optimal, in which case second-order theory of mind appears to offer the most significant advantage.
16 Aug 2022
en-US
http://fse.studenttheses.ub.rug.nl/id/eprint/28327
63
Thesis (Master's Thesis / Essay)
attachment
Full Text PDF
https://fse.studenttheses.ub.rug.nl/28327/1/mAI_2022_DupuisNK.pdf
2022-08-23 11:54:35
3
thesis
Örebro University, School of Science and Technology
Inferadi, Salam
Johnsson, Olof
The Hanabi challenge: From Artificial Teams to Mixed Human-Machine Teams
This report describes the further development of the Graphical User Interface (GUI) for the Hanabi Benchmark. Hanabi is a card game that has been introduced as a new frontier for artificial intelligence (AI). The goal of the project was to implement a human user in the GUI and make it possible to play against Machine Learning (ML) based agents, i.e. non-human players, in the GUI. To achieve these goals, we implemented human controls in the GUI so that a human user can play the game. Agent models were integrated into the GUI for the human to play with. Finally, a small study was conducted to evaluate the agents' performances.
en
http://oru.diva-portal.org/smash/record.jsf?pid=diva2%3A1691114&dswid=-1981
40
Independent thesis Basic level (degree of Bachelor)
attachment
Fulltext PDF
https://www.diva-portal.org/smash/get/diva2:1691114/FULLTEXT01.pdf
2022-09-07 10:06:48
3
thesis
Örebro University, School of Science and Technology
Nguyen, Van Hoa
A Graphical User Interface For The Hanabi Challenge Benchmark
This report describes the development of the Graphical User Interface (GUI) for the Hanabi Challenge Benchmark. The benchmark is based on the popular card game Hanabi and presents itself as a new research frontier in artificial intelligence for cooperative multi-agent challenges. The project's intention and goal is to interpret and visualize the data output from the benchmark to give us a better understanding of it. A GUI was then developed by using knowledge within theory of mind in combination with theories within human-computer interaction. The results of this project were evaluated through a small-scale usability test. Users of different ages, genders and levels of computer knowledge tested the application, and the quality of the GUI was assessed through a questionnaire.
http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-94615
33
Independent thesis Basic level (professional degree)
attachment
PDF
https://www.diva-portal.org/smash/get/diva2:1597503/FULLTEXT01.pdf
2022-09-07 10:09:15
3
preprint
arXiv
Galván
Edgar
Simpson
Gavin
Ameneyro
Fred Valdez
Computer Science - Neural and Evolutionary Computing
Evolving the MCTS Upper Confidence Bounds for Trees Using a Semantic-inspired Evolutionary Algorithm in the Game of Carcassonne
Monte Carlo Tree Search (MCTS) is a sampling best-first method to search for optimal decisions. The success of MCTS depends heavily on how the tree is built, and the selection process plays a fundamental role in this. One particular selection mechanism that has proved to be reliable is based on the Upper Confidence Bounds for Trees (UCT). The UCT attempts to balance exploration and exploitation by considering the values stored in the statistical tree of the MCTS. However, some tuning of the MCTS UCT is necessary for this to work well. In this work, we use Evolutionary Algorithms (EAs) to evolve mathematical expressions with the goal of substituting the UCT formula and using the evolved expressions in MCTS. More specifically, we evolve expressions by means of our proposed Semantic-inspired Evolutionary Algorithm in MCTS approach (SIEA-MCTS). This is inspired by semantics in Genetic Programming (GP), where the use of fitness cases is seen as a requirement. Fitness cases are normally used to determine the fitness of individuals and can be used to compute the semantic similarity (or dissimilarity) of individuals. However, fitness cases are not available in MCTS. We extend this notion by using multiple reward values from MCTS that allow us to determine both the fitness of an individual and its semantics. By doing so, we show how SIEA-MCTS is able to successfully evolve mathematical expressions that yield better or competitive results compared to UCT without the need to tune the evolved expressions. We compare the performance of the proposed SIEA-MCTS against MCTS algorithms, MCTS Rapid Action Value Estimation algorithms, three variants of the *-minimax family of algorithms, a random controller and two more EA approaches. We consistently show how SIEA-MCTS outperforms most of these intelligent controllers in the challenging game of Carcassonne.
2022-08-29
arXiv.org
http://arxiv.org/abs/2208.13589
2022-10-09 09:56:30
arXiv:2208.13589 [cs]
arXiv:2208.13589
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2208.13589.pdf
2022-10-09 09:56:41
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2208.13589
2022-10-09 09:56:47
1
text/html
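For reference next to the SIEA-MCTS entry above: the standard UCT selection rule whose closed form the paper evolves replacements for. This is the generic formula over the tree statistics, not the evolved expressions.

import math

def uct(child_value_sum, child_visits, parent_visits, c=math.sqrt(2)):
    """Upper Confidence Bounds for Trees: exploitation term plus exploration bonus."""
    if child_visits == 0:
        return float("inf")                  # unvisited children are tried first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Selection descends to the child maximizing uct(...); SIEA-MCTS substitutes
# an evolved expression over these same statistics.
print(uct(7.0, 10, 50), uct(2.0, 3, 50))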
blogPost
William Zhang
Fan Pu Zeng
Analysis of Symmetry and Conventions in Off-Belief Learning (OBL) in Hanabi
We investigate whether policies learnt by agents using the Off-Belief Learning (OBL) algorithm in the multi-player cooperative game Hanabi, in the zero-shot coordination (ZSC) context, are invariant across symmetries of the game, and whether any conventions formed during training are arbitrary or natural. We do this by performing a convention analysis on the action matrix of the agent's behavior, introducing a novel technique called Intervention Analysis to estimate whether the actions taken by the learnt policies are equivalent between isomorphisms of the same game state, and finally evaluating whether our observed results also hold in a simplified version of Hanabi which we call Mini-Hanabi.
https://fanpu.io/blog/2022/symmetry-and-conventions-in-obl-hanabi/
attachment
Analysis_Of_Symmetry_And_Conventions_In_Off_Belief_Learning_In_Hanabi.pdf
https://fanpu.io/assets/research/Analysis_Of_Symmetry_And_Conventions_In_Off_Belief_Learning_In_Hanabi.pdf
2023-03-01 11:40:51
3
thesis
Computer Science Department, School of Computer Science, Carnegie Mellon University
Arnav Mahajan
Using intuitive behavior models to adapt to and work with human teammates in Hanabi
An agent that can rapidly and accurately model its teammate is a powerful tool in the field of Collaborative AI. Furthermore, if an approximation of this goal were possible in the field of Human-AI Collaboration, teams of people and machines could be more efficient and effective immediately after starting to work together. Using the cooperative card game Hanabi as a testbed, we developed the Chief agent, which models teammates using a pool of intuitive behavioral models. To achieve the goal of rapid learning, it uses Bayesian inference to quickly evaluate the different models relative to each other. To generate an accurate model, it uses historical data augmented by up-to-date knowledge and sampling methods to handle environmental noise and unknowns. We demonstrate that the Chief's mechanisms for modeling and understanding the teammate show promise, but its overall performance still needs improvement before it reliably outperforms a baseline that skips inferring a best strategy and assumes all strategies in the pool are equally likely for the teammate.
en
http://reports-archive.adm.cs.cmu.edu/anon/anon/usr0/ftp/usr/ftp/2022/abstracts/22-119.html
43
M.S. Thesis
attachment
CMU-CS-22-119.pdf
http://reports-archive.adm.cs.cmu.edu/anon/2022/CMU-CS-22-119.pdf
2023-03-11 03:49:56
3
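A hedged sketch of the Bayesian update over a pool of teammate models that the Mahajan thesis abstract above describes; the candidate models and their likelihood functions below are invented placeholders, not the Chief agent's actual pool.

def update_posterior(posterior, models, observation):
    """posterior: {model_name: prob}; models: {model_name: likelihood fn}."""
    updated = {m: p * models[m](observation) for m, p in posterior.items()}
    z = sum(updated.values()) or 1e-12       # guard against all-zero likelihoods
    return {m: p / z for m, p in updated.items()}

models = {                                   # hypothetical teammate behavior models
    "hint_first":    lambda obs: 0.8 if obs == "hint" else 0.1,
    "discard_first": lambda obs: 0.1 if obs == "hint" else 0.7,
}
posterior = {"hint_first": 0.5, "discard_first": 0.5}
for obs in ["hint", "hint", "play"]:
    posterior = update_posterior(posterior, models, obs)
print(posterior)                             # mass shifts toward the model that explains the actions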
preprint
arXiv
Jeon
Hyeonchang
Kim
Kyung-Joong
Computer Science - Artificial Intelligence
Behavioral Differences is the Key of Ad-hoc Team Cooperation in Multiplayer Games Hanabi
Ad-hoc team cooperation is the problem of cooperating with other players that have not been seen in the learning process. Recently, this problem has been considered in the context of Hanabi, which requires cooperation without explicit communication with the other players. While strategies learned through self-play reinforcement learning (RL) have shown success, such agents often fail to cooperate with unseen agents after the initial learning is completed. In this paper, we categorize the results of ad-hoc team cooperation into Failure, Success, and Synergy and analyze the associated failures. First, we confirm that agents learning via RL each converge to one strategy, but not necessarily the same strategy, and that these agents can deploy different strategies even though they use the same hyperparameters. Second, we confirm that the larger the behavioral difference, the more pronounced the failure of ad-hoc team cooperation, as demonstrated using hierarchical clustering and Pearson correlation. Such agents are grouped into distinctly different clusters through hierarchical clustering, and the correlation between behavioral differences and ad-hoc team performance is -0.978. Our results improve understanding of the key factors that form successful ad-hoc team cooperation in multi-player games.
2023-03-12
arXiv.org
http://arxiv.org/abs/2303.06775
2023-03-16 06:56:29
arXiv:2303.06775 [cs]
arXiv:2303.06775
attachment
2303.06775.pdf
https://arxiv.org/pdf/2303.06775.pdf
2023-03-16 06:56:59
3
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2303.06775.pdf
2023-03-16 06:57:01
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2303.06775
2023-03-16 06:57:08
1
text/html
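The headline statistic in the Jeon & Kim abstract above (Pearson r = -0.978 between behavioral difference and ad-hoc performance) is a plain correlation; the sketch below shows the computation on made-up placeholder numbers, not the paper's data.

import numpy as np

behavioral_difference = np.array([0.05, 0.10, 0.30, 0.55, 0.80])   # placeholder
adhoc_team_score      = np.array([23.1, 22.4, 17.9, 10.2,  4.5])   # placeholder

r = np.corrcoef(behavioral_difference, adhoc_team_score)[0, 1]
print(f"Pearson r = {r:.3f}")                # strongly negative, as in the paper's finding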
thesis
School of Electronic and Computer Engineering, Technical University of Crete, Greece
Sofia Maria Nikolakaki
Algorithm Modeling for Hardware Implementation of a Blokus Duo Player
We applied the algorithm to the game of Blokus Duo, a relatively new game that is open for research, since it meets the requirements of MCTS and appears to be demanding. More specifically, we present four competitive Blokus Duo players and show that the ones based on Monte Carlo simulations outperform the Minimax-based one. To the best of our knowledge, this is the first work that compares, for the same game, software-based Minimax, Monte Carlo and MCTS players. For each of these players we suggest opportunities for hardware implementation and discuss potential bottlenecks. Furthermore, we apply certain heuristics to our MCTS-based player to understand how they affect the efficiency of the algorithm specifically for the game of Blokus Duo.
http://artemis.library.tuc.gr/DT2014-0060/DT2014-0060.pdf
109
attachment
DT2014-0060.pdf
http://artemis.library.tuc.gr/DT2014-0060/DT2014-0060.pdf
2023-04-17 05:47:30
3
journalArticle
6
Journal Information System Development (ISD)
2
Hong Liang Cai
Sebastian Aldi
Winston Renatan
Artificial Intelligence for Blokus Classic using Heuristics, FloodFill, and Greedy Algorithm
Blokus is an abstract strategy game with complex variables that determine one's moves. We propose an approach to Artificial Intelligence in Blokus using heuristics, flood fill, and a greedy algorithm, which we call LeakyAI. Both the game and its implementation are written in Java. To test the result, we benchmarked it against a brute-force AI, against itself, and against the developers.
2 July 2021
Indonesian
https://ejournal-medan.uph.edu/index.php/ISD/article/view/433
9
attachment
263
https://ejournal-medan.uph.edu/index.php/isd/article/view/433/263
2023-04-17 05:55:48
3
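One of the ingredients the LeakyAI abstract above names is flood fill; a generic 4-connected reachability count over a Blokus-like grid is sketched below, under the assumption that open territory is what the heuristic measures, which may differ from the paper's Java implementation.

from collections import deque

def flood_fill(board, start):
    """Count empty cells (value 0) 4-connected to `start` in a 2D grid."""
    rows, cols = len(board), len(board[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen and board[nr][nc] == 0:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)

board = [[0, 0, 1], [0, 1, 0], [0, 0, 0]]
print(flood_fill(board, (0, 0)))             # open territory reachable from a corner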
conferencePaper
ISBN 978-1-4799-6245-7
2014 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2014.7082825
Shanghai, China
IEEE
Kojima
Akira
FPGA implementation of Blokus Duo player using hardware/software co-design
12/2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7082825/
2023-04-17 05:56:43
378-381
2014 International Conference on Field-Programmable Technology (FPT)
journalArticle
Austin
Jathan
Curl
Emelie
Exploring Combinatorics and Graph Theory with Simple Blokus
2022-08-08
en
DOI.org (Crossref)
https://www.tandfonline.com/doi/full/10.1080/07468342.2022.2100147
2023-04-17 05:57:05
273-281
53
The College Mathematics Journal
DOI 10.1080/07468342.2022.2100147
4
The College Mathematics Journal
ISSN 0746-8342, 1931-1346
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718428
Kyoto, Japan
IEEE
Olivito
Javier
Gonzalez
Carlos
Resano
Javier
An FPGA-based specific processor for Blokus Duo
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718428/
2023-04-17 05:57:28
502-505
2013 International Conference on Field-Programmable Technology (FPT)
report
San Luis Obispo
California Polytechnic State University
Chin Chao
Blokus Game Solver
Blokus (officially pronounced as “Block us”) is an abstract strategy board game with transparent Tetris-shaped, color pieces that players are trying to place onto the board. However, the players can only place a piece that touches at least one corner of their own pieces on the board. The ultimate goal of the game is to place as many pieces onto the board as a player can while blocking off the opponent’s ability to place more pieces onto the board. Each player has pieces with different shapes and sizes that can be placed onto the board, where each block within a piece counts as one point. The player that scores the highest wins the game.
Just like other strategy board games such as chess, Blokus contains definite strategic patterns that can be solved with computer algorithms. Various algorithms have been devised to develop winning strategies and AIs against human opponents. In this work, I develop random and different greedy strategies to analyze the effectiveness of factors such as piece size, corner availability, and first-player advantage.
December 2018
https://digitalcommons.calpoly.edu/cpesp/290/
attachment
viewcontent.cgi
https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1305&context=cpesp
2023-04-17 05:59:48
3
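A toy greedy move scorer over the factors Chao's report analyzes (piece size, corner availability); the weights, move encoding, and blocking term are illustrative assumptions rather than the report's code.

def score_move(piece_size, corners_created, corners_blocked, w=(1.0, 0.5, 0.5)):
    """Bigger pieces score more points; new corners open future placements;
    blocking opponent corners restricts theirs. Weights are hypothetical."""
    return w[0] * piece_size + w[1] * corners_created + w[2] * corners_blocked

candidate_moves = [
    {"piece_size": 5, "corners_created": 2, "corners_blocked": 0},
    {"piece_size": 3, "corners_created": 4, "corners_blocked": 1},
]
print(max(candidate_moves, key=lambda m: score_move(**m)))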
conferencePaper
ISBN 978-1-4799-6245-7
2014 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2014.7082823
Shanghai, China
IEEE
Qasemi
Ehsan
Samadi
Amir
Shadmehr
Mohammad H.
Azizian
Bardia
Mozaffari
Sajjad
Shirian
Amir
Alizadeh
Bijan
Highly scalable, shared-memory, Monte-Carlo tree search based Blokus Duo Solver on FPGA
12/2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7082823/
2023-04-17 06:00:24
370-373
2014 International Conference on Field-Programmable Technology (FPT)
attachment
Full Text
https://moscow.sci-hub.se/4176/0772802f9865a79a36cf1a5c77267101/qasemi2014.pdf#navpanes=0&view=FitH
2023-04-17 06:00:31
1
application/pdf
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718423
Kyoto, Japan
IEEE
Liu
Chester
Implementation of a highly scalable blokus duo solver on FPGA
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718423/
2023-04-17 06:00:52
482-485
2013 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718429
Kyoto, Japan
IEEE
Kojima
Akira
An implementation of Blokus Duo player on FPGA
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718429/
2023-04-17 06:01:05
506-509
2013 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718425
Kyoto, Japan
IEEE
Altman
Erik
Auerbach
Joshua S.
Bacon
David F.
Baldini
Ioana
Cheng
Perry
Fink
Stephen J.
Rabbah
Rodric M.
The Liquid Metal Blokus Duo Design
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718425/
2023-04-17 06:01:22
490-493
2013 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718427
Kyoto, Japan
IEEE
Sugimoto
Naru
Miyajima
Takaaki
Kuhara
Takuya
Katuta
Yuki
Mitsuichi
Takushi
Amano
Hideharu
Artificial intelligence of Blokus Duo on FPGA using Cyber Work Bench
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718427/
2023-04-17 06:03:04
498-501
2013 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-6245-7
2014 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2014.7082820
Shanghai, China
IEEE
Sugimoto
Naru
Amano
Hideharu
Hardware/software co-design architecture for Blokus Duo solver
12/2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7082820/
2023-04-17 06:03:28
358-361
2014 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-2198-0 978-1-4799-2199-7
2013 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2013.6718424
Kyoto, Japan
IEEE
Cai
Jiu Cheng
Lian
Ruolong
Wang
Mengyao
Canis
Andrew
Choi
Jongsok
Fort
Blair
Hart
Eric
Miao
Emily
Zhang
Yanyan
Calagar
Nazanin
Brown
Stephen
Anderson
Jason
From C to Blokus Duo with LegUp high-level synthesis
12/2013
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/6718424/
2023-04-17 06:03:46
486-489
2013 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-6245-7
2014 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2014.7082824
Shanghai, China
IEEE
Mashimo
Susumu
Fukuda
Kansuke
Amagasaki
Motoki
Iida
Masahiro
Kuga
Morihiro
Sueyoshi
Toshinori
Blokus Duo engine on a Zynq
12/2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7082824/
2023-04-17 06:04:11
374-377
2014 International Conference on Field-Programmable Technology (FPT)
conferencePaper
ISBN 978-1-4799-6245-7
2014 International Conference on Field-Programmable Technology (FPT)
DOI 10.1109/FPT.2014.7082821
Shanghai, China
IEEE
Borhanifar
Hossein
Zolnouri
Seyed Peyman
Optimize MinMax algorithm to solve Blokus Duo game by HDL
12/2014
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7082821/
2023-04-17 06:05:55
362-365
2014 International Conference on Field-Programmable Technology (FPT)
journalArticle
Ando
Yuki
Ogawa
Masataka
Mizoguchi
Yuya
Kumagai
Kouta
Torng-Der
Miaw
Honda
Shinya
A Case Study of FPGA Blokus Duo Solver by System-Level Design
This paper presents a case study of designing a Blokus Duo solver using our system-level design toolkit named SystemBuilder. We start with a model of the Blokus Duo solver in the C language with communication APIs provided by SystemBuilder. We then iteratively verified and tuned the parameters of the solver by running the model on a general-purpose computer in order to improve its performance. Finally, the FPGA implementation was automatically generated from the model by SystemBuilder. Despite targeting an FPGA, we never wrote any hardware description language throughout the case study. The case study demonstrates how easily systems can be designed on FPGAs with system-level design tools.
2014-12-03
en
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/2693714.2693725
2023-04-17 06:06:14
57-62
42
ACM SIGARCH Computer Architecture News
DOI 10.1145/2693714.2693725
4
SIGARCH Comput. Archit. News
ISSN 0163-5964
journalArticle
Marchionna
Luca
Pugliese
Giulio
Martini
Mauro
Angarano
Simone
Salvetti
Francesco
Chiaberge
Marcello
Deep Instance Segmentation and Visual Servoing to Play Jenga with a Cost-Effective Robotic System
The game of Jenga is a benchmark used for developing innovative manipulation solutions for complex tasks. Indeed, it encourages the study of novel robotics methods to successfully extract blocks from a tower. A Jenga game involves many traits of complex industrial and surgical manipulation tasks, requiring a multi-step strategy, the combination of visual and tactile data, and the highly precise motion of a robotic arm to perform a single block extraction. In this work, we propose a novel, cost-effective architecture for playing Jenga with e.Do, a 6DOF anthropomorphic manipulator manufactured by Comau, a standard depth camera, and an inexpensive monodirectional force sensor. Our solution focuses on a visual-based control strategy to accurately align the end-effector with the desired block, enabling block extraction by pushing. To this aim, we trained an instance segmentation deep learning model on a synthetic custom dataset to segment each piece of the Jenga tower, allowing for visual tracking of the desired block’s pose during the motion of the manipulator. We integrated the visual-based strategy with a 1D force sensor to detect whether the block could be safely removed by identifying a force threshold value. Our experimentation shows that our low-cost solution allows e.DO to precisely reach removable blocks and perform up to 14 consecutive extractions in a row.
2023-01-09
en
DOI.org (Crossref)
https://www.mdpi.com/1424-8220/23/2/752
2023-04-17 06:12:40
752
23
Sensors
DOI 10.3390/s23020752
2
Sensors
ISSN 1424-8220
attachment
Full Text
https://www.mdpi.com/1424-8220/23/2/752/pdf?version=1673261970
2023-04-17 06:12:46
1
application/pdf
journalArticle
Kelly
Kathryn
Liese
Jeffrey
Let’s Get Rolling! Exact Optimal Solitaire Yahtzee
2022-05-27
en
DOI.org (Crossref)
https://www.tandfonline.com/doi/full/10.1080/0025570X.2022.2055334
2023-04-17 06:14:24
205-219
95
Mathematics Magazine
DOI 10.1080/0025570X.2022.2055334
3
Mathematics Magazine
ISSN 0025-570X, 1930-0980
conferencePaper
ISBN 978-1-4503-8807-8
International Conference on the Foundations of Digital Games
DOI 10.1145/3402942.3409778
Bugibba, Malta
ACM
Saraiva
Rommel Dias
Grichshenko
Alexandr
Araújo
Luiz Jonatã Pires de
Amaro Junior
Bonfim
de Carvalho
Guilherme Nepomuceno
Using Ant Colony Optimisation for map generation and improving game balance in the Terra Mystica and Settlers of Catan board games
2020-09-15
en
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3402942.3409778
2023-04-17 06:15:08
1-7
FDG '20: International Conference on the Foundations of Digital Games
bookSection
12145
ISBN 978-3-030-53955-9 978-3-030-53956-6
Advances in Swarm Intelligence
Cham
Springer International Publishing
Tan
Ying
Shi
Yuhui
Tuba
Milan
de Araújo
Luiz Jonatã Pires
Grichshenko
Alexandr
Pinheiro
Rodrigo Lankaites
Saraiva
Rommel D.
Gimaeva
Susanna
Map Generation and Balance in the Terra Mystica Board Game Using Particle Swarm and Local Search
2020
en
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-030-53956-6_15
2023-04-17 06:15:24
Series Title: Lecture Notes in Computer Science
DOI: 10.1007/978-3-030-53956-6_15
163-175
report
Greg Heon
Lilli Oetting
An Application of Machine Learning to the Board Game Pentago
We present an application of machine learning to the two-player strategy game Pentago. We have explored two different models through which to train our program: a feature-based model and a reinforcement learning model. This paper discusses the relative merits of each, as well as the obstacles we encountered using each model.
http://cs229.stanford.edu/proj2012/HeonOetting-AnAppliactionOfMachineLearningToTheBoardGamePentago.pdf
attachment
HeonOetting-AnAppliactionOfMachineLearningToTheBoardGamePentago.pdf
http://cs229.stanford.edu/proj2012/HeonOetting-AnAppliactionOfMachineLearningToTheBoardGamePentago.pdf
2023-04-17 06:17:45
3
report
Jimbo, Shuji
Learning finite functions by neural networks : Evaluation of Pentago positions by convolutional neural networks
A convolutional neural network (CNN) is a useful tool for approximating a finite function and is used as a solver for various real-world problems. This paper mainly reports the results of experiments on training variations of a small image-recognition CNN to evaluate Pentago positions. The author hopes that the results will inform discussion of the applicability of deep neural networks to research in theoretical computer science.
https://repository.kulib.kyoto-u.ac.jp/dspace/handle/2433/251730
attachment
2096-03.pdf
https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433/251730/1/2096-03.pdf
2023-04-17 06:20:42
3
journalArticle
Gaina
Raluca D.
Goodman
James
Perez-Liebana
Diego
TAG: Terraforming Mars
Games and Artificial Intelligence (AI) have had a tight relationship for many years. A multitude of games have been used as environments in which AI players can learn to act and interact with others or the game mechanics directly; used as optimisation problems; used as generators of large amounts of data which can be analysed to learn about the game, or about the players; or used as containers of content which can be automatically generated by AI methods. Yet many of these environments have been very simple and limited in scope. We propose here a much more complex environment based on the boardgame Terraforming Mars, implemented as part of the Tabletop Games Framework: a very large and dynamic action space, hidden information, large amounts of content, resource management and high variability make this problem domain stand out in the current landscape and a very interesting problem for AI methods of multiple domains. We include results of baseline AI game-players in this game and in-depth analysis of the game itself, together with an exploration of problem complexity, challenges and opportunities.
2021-10-04
TAG
DOI.org (Crossref)
https://ojs.aaai.org/index.php/AIIDE/article/view/18902
2023-04-17 06:27:48
148-155
17
Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment
DOI 10.1609/aiide.v17i1.18902
1
AIIDE
ISSN 2334-0924, 2326-909X
attachment
Full Text
https://ojs.aaai.org/index.php/AIIDE/article/download/18902/18667
2023-04-17 06:27:56
1
application/pdf
bookSection
74
ISBN 978-3-319-59393-7 978-3-319-59394-4
Agent and Multi-Agent Systems: Technology and Applications
Cham
Springer International Publishing
Jezic
Gordan
Kusek
Mario
Chen-Burger
Yun-Heh Jessica
Howlett
Robert J.
Jain
Lakhmi C.
Dreżewski
Rafał
Klęczar
Maciej
Artificial Intelligence Techniques for the Puerto Rico Strategy Game
2018
en
DOI.org (Crossref)
http://link.springer.com/10.1007/978-3-319-59394-4_8
2023-04-17 06:31:21
Series Title: Smart Innovation, Systems and Technologies
DOI: 10.1007/978-3-319-59394-4_8
77-87
conferencePaper
Mavromoustakos Blom
Multiplayer Tension In the Wild: A Hearthstone Case
Games are designed to elicit strong emotions during game play, especially when players are competing against each other. Artificial Intelligence applied to predict a player's emotions has mainly been tested on single-player experiences in low-stakes settings and short-term interactions. How do players experience and manifest affect in high-stakes competitions, and which modalities can capture this? This paper reports a first experiment in this line of research, using a competition of the video game Hearthstone where both competing players' game play and facial expressions were recorded over the course of the entire match, which could span up to 41 minutes. Using two experts' annotations of tension made with a continuous video affect annotation tool, we attempt to predict tension from the webcam footage of the players alone. Treating both the input and the tension output in a relative fashion, our best models reach 66.3% average accuracy (up to 79.2% at the best fold) in the challenging leave-one-participant-out cross-validation task. This initial experiment shows a way forward for affect annotation in games "in the wild" in high-stakes, real-world competitive settings.
2023
https://research.tilburguniversity.edu/en/publications/multiplayer-tension-in-the-wild-a-hearthstone-case
Foundations of Digital Games
attachment
FDG_Modelling_Player_Tension_Through_Facial_Expressions_in_Competitive_Hearthstone_2_.pdf
https://pure.uvt.nl/ws/portalfiles/portal/69103237/FDG_Modelling_Player_Tension_Through_Facial_Expressions_in_Competitive_Hearthstone_2_.pdf
2023-04-18 04:45:05
3
preprint
arXiv
Świechowski
Maciej
Tajmajer
Tomasz
Janusz
Andrzej
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Improving Hearthstone AI by Combining MCTS and Supervised Learning Algorithms
We investigate the impact of supervised prediction models on the strength and efficiency of artificial agents that use the Monte-Carlo Tree Search (MCTS) algorithm to play the popular video game Hearthstone: Heroes of Warcraft. We overview our custom implementation of the MCTS that is well-suited for games with partially hidden information and random effects. We also describe experiments which we designed to quantify the performance of our Hearthstone agent's decision making. We show that even simple neural networks can be trained and successfully used for the evaluation of game states. Moreover, we demonstrate that by providing guidance to the game state search heuristic, it is possible to substantially improve the win rate and, at the same time, reduce the required computations.
2018-08-14
arXiv.org
http://arxiv.org/abs/1808.04794
2023-04-18 04:46:43
arXiv:1808.04794 [cs]
arXiv:1808.04794
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1808.04794.pdf
2023-04-18 04:46:45
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1808.04794
2023-04-18 04:46:52
1
text/html
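A sketch of the hybrid leaf evaluation the Świechowski et al. abstract above describes, in which a trained state evaluator supplements (or replaces) random playouts inside MCTS; the evaluator and playout below are stand-ins, not the paper's trained models.

import math, random

def random_playout(state):
    # Placeholder: a real playout would simulate moves to the end of the game.
    return random.choice([1.0, -1.0])

def evaluate_leaf(state, value_net, n_rollouts=0):
    """Blend a learned evaluation with optional random rollouts."""
    value = value_net(state)                 # supervised model's estimate
    for _ in range(n_rollouts):
        value += random_playout(state)
    return value / (1 + n_rollouts)

toy_value_net = lambda s: math.tanh(sum(s) / 10.0)   # stand-in evaluator
print(evaluate_leaf((3, 1, -2), toy_value_net, n_rollouts=3))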
thesis
New Jersey Institute of Technology
Watson, Connor W.
Analysis of gameplay strategies in hearthstone: a data science approach
In recent years, games have been a popular test bed for AI research, and the presence of Collectible Card Games (CCGs) in that space is still increasing. One such CCG for both competitive/casual play and AI research is Hearthstone, a two-player adversarial game where each player seeks to implement one of several gameplay strategies to defeat their opponent and decrease all of their Health points to zero. Although some open source simulators exist, some of their methodologies for simulated agents create opponents with a relatively low skill level. Using evolutionary algorithms, this thesis seeks to evolve agents with a higher skill level than those implemented in one such simulator, SabberStone. New benchmarks are proposed using supervised learning techniques to predict gameplay strategies from game data, and using unsupervised learning techniques to discover and visualize patterns that may be used in player modeling to differentiate gameplay strategies.
http://archives.njit.edu/vol01/etd/2020s/2020/njit-etd2020-006/njit-etd2020-006.pdf
journalArticle
Eiji Sakurai
Koji Hasebe
Decision-Making in Hearthstone Based on Evolutionary Algorithm
Hearthstone is a two-player turn-based collectible card game with hidden information and randomness. Generally, the search space for actions in this game grows exponentially because the players must perform a series of actions, selecting each action from many options in each turn. When playing such a game, it is often difficult to use a game tree search technique to find the optimal sequence of actions up until the end of a turn. To solve this problem, we propose a method to determine a series of actions in Hearthstone based on an evolutionary algorithm called the rolling horizon evolutionary algorithm (RHEA). To apply RHEA to this game, we modify the genetic operators and add techniques for selecting actions based on previous search results and for filtering (pruning) some of the action options. To evaluate the effectiveness of these improvements, we implemented an agent based on the proposed method and played it against an agent based on the original RHEA for several decks. The result showed a maximum win rate of over 97.5%. Further, our agent played against the top-performing agents in previous competitions and outperformed most of them.
2023
https://www.cs.tsukuba.ac.jp/~hasebe/downloads/icaart2023_sakurai.pdf
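Rolling horizon evolution in miniature, the base algorithm the Sakurai & Hasebe paper modifies; the action set, forward model, and fitness below are toy stand-ins for the real Hearthstone simulator, and the paper's custom operators and pruning are not reproduced.

import random

ACTIONS = list(range(4))                     # toy action ids
HORIZON, POP, GENS = 5, 10, 20

def fitness(seq):
    # Stand-in forward model: pretend action 2 is always good, others neutral.
    return sum(1.0 if a == 2 else 0.0 for a in seq)

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + [random.choice(ACTIONS)] + seq[i + 1:]

population = [[random.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP // 2]
    population = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]

# Rolling horizon: execute only the first action of the best plan, then replan.
print("action to execute this turn:", max(population, key=fitness)[0])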
conferencePaper
ISBN 978-1-5090-1883-3
2016 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/CIG.2016.7860416
Santorini, Greece
IEEE
Bursztein
Elie
I am a legend: Hacking hearthstone using statistical learning methods
9/2016
I am a legend
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7860416/
2023-04-18 04:50:48
1-8
2016 IEEE Conference on Computational Intelligence and Games (CIG)
journalArticle
Hoover
Amy K.
Togelius
Julian
Lee
Scott
de Mesentier Silva
Fernando
The Many AI Challenges of Hearthstone
03/2020
en
DOI.org (Crossref)
http://link.springer.com/10.1007/s13218-019-00615-z
2023-04-18 04:51:07
33-43
34
KI - Künstliche Intelligenz
DOI 10.1007/s13218-019-00615-z
1
Künstl Intell
ISSN 0933-1875, 1610-1987
attachment
Submitted Version
https://arxiv.org/pdf/1907.06562
2023-04-18 04:51:11
1
application/pdf
conferencePaper
ISBN 978-1-5090-1883-3
2016 IEEE Conference on Computational Intelligence and Games (CIG)
DOI 10.1109/CIG.2016.7860426
Santorini, Greece
IEEE
Garcia-Sanchez
Pablo
Tonda
Alberto
Squillero
Giovanni
Mora
Antonio
Merelo
Juan J.
Evolutionary deckbuilding in hearthstone
9/2016
DOI.org (Crossref)
http://ieeexplore.ieee.org/document/7860426/
2023-04-18 04:52:11
1-8
2016 IEEE Conference on Computational Intelligence and Games (CIG)
attachment
Evolutionary-Deckbuilding-in-HearthStone.pdf
https://www.researchgate.net/profile/Alberto-Tonda/publication/304246423_Evolutionary_Deckbuilding_in_HearthStone/links/5a5bc15faca2727d608a25b6/Evolutionary-Deckbuilding-in-HearthStone.pdf
2023-04-18 04:54:19
3
preprint
arXiv
Silva
Fernando de Mesentier
Canaan
Rodrigo
Lee
Scott
Fontaine
Matthew C.
Togelius
Julian
Hoover
Amy K.
Computer Science - Artificial Intelligence
Computer Science - Neural and Evolutionary Computing
Evolving the Hearthstone Meta
Balancing an ever-growing strategic game of high complexity, such as Hearthstone, is a complex task. The target of making strategies diverse and customizable results in a delicate, intricate system. Tuning over 2000 cards to generate the desired outcome without disrupting the existing environment becomes a laborious challenge. In this paper, we discuss the impact that changes to existing cards can have on strategy in Hearthstone. By analyzing the win rates of match-ups across different decks, played with different strategies, we propose to compare their performance before and after changes are made to improve or worsen different cards. Then, using an evolutionary algorithm, we search for a combination of changes to the card attributes that causes the decks to approach equal, 50% win rates. We then expand our evolutionary algorithm into a multi-objective solution that searches for this result while making the minimum number of changes, and as a consequence the minimum disruption, to the existing cards. Lastly, we propose and evaluate metrics to serve as heuristics with which to decide which cards to target with balance changes.
2019-07-02
arXiv.org
http://arxiv.org/abs/1907.01623
2023-04-18 04:54:36
arXiv:1907.01623 [cs]
arXiv:1907.01623
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/1907.01623.pdf
2023-04-18 04:54:40
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/1907.01623
2023-04-18 04:54:46
1
text/html
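A sketch of the two objectives the abstract above optimizes: distance of matchup win rates from 50%, and the number of card attributes changed. The simulator is a placeholder; real use would plug in deck-vs-deck playouts.

def balance_objectives(changes, simulate_winrates):
    """changes: {card: attribute delta}; returns (imbalance, edit_cost), both minimized."""
    winrates = simulate_winrates(changes)    # e.g. win rate per deck matchup
    imbalance = sum(abs(w - 0.5) for w in winrates) / len(winrates)
    edit_cost = sum(1 for delta in changes.values() if delta != 0)
    return imbalance, edit_cost

fake_sim = lambda changes: [0.52, 0.47, 0.55]   # placeholder simulator
print(balance_objectives({"Fireball": -1, "Yeti": 0}, fake_sim))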
journalArticle
García-Sánchez
Pablo
Tonda
Alberto
Fernández-Leiva
Antonio J.
Cotta
Carlos
Optimizing Hearthstone agents using an evolutionary algorithm
01/2020
en
DOI.org (Crossref)
https://linkinghub.elsevier.com/retrieve/pii/S0950705119304356
2023-04-18 04:55:18
105032
188
Knowledge-Based Systems
DOI 10.1016/j.knosys.2019.105032
Knowledge-Based Systems
ISSN 09507051
attachment
garcia19optimizing.pdf
http://www.lcc.uma.es/~ccottap/papers/garcia19optimizing.pdf
2023-04-18 04:55:24
3
conferencePaper
ISBN 978-1-4503-6571-0
Proceedings of the 13th International Conference on the Foundations of Digital Games
DOI 10.1145/3235765.3235791
Malmö, Sweden
ACM
Bhatt
Aditya
Lee
Scott
de Mesentier Silva
Fernando
Watson
Connor W.
Togelius
Julian
Hoover
Amy K.
Exploring the hearthstone deck space
2018-08-07
en
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3235765.3235791
2023-04-18 04:55:33
1-10
FDG '18: Foundations of Digital Games 2018
attachment
Exploring-the-hearthstone-deck-space.pdf
https://www.researchgate.net/profile/Fernando-De-Mesentier-Silva/publication/327637789_Exploring_the_hearthstone_deck_space/links/5c50b295a6fdccd6b5d1e5a2/Exploring-the-hearthstone-deck-space.pdf
2023-04-18 04:55:46
3
thesis
Politecnico di Torino
Stefano Griva
Computational Intelligence Techniques for Games with Incomplete Information
Artificial intelligence is an ever-growing field in computer science, with new techniques and algorithms being developed every day. Our aim is to show how AIs can improve their performance by using hidden information that would normally require complex human deduction to exploit. Modern game AIs often rely on clear and curated data, deterministic information and overall accurate numbers to make their calculations; however, there are a lot of games that involve pieces of information that are incomplete or hidden. Incomplete information can be extremely helpful to an AI, but it requires additional care because it is usually based on statistical analysis and heuristics. Our focus is set on a few innovative computational intelligence techniques that aim at improving the efficiency of hidden-information-based AIs by allowing them to explore non-deterministic scenarios: 1) The Double Inverted Index is an algorithm that can be used in hidden information games, such as card games, to narrow the great number of possible scenarios down to a reasonable number, an approach based on how humans would think in similar situations. 2) The Blunder Threshold is a technique that helps the AI navigate probabilistic scenarios by balancing the pros and cons of deeper analysis and uncertain information. We explore different parameters and options of these techniques and show their efficacy in practice, with a focus on the chosen test game, Hearthstone.
en
https://webthesis.biblio.polito.it/26844/
60
Master's degree program in Computer Engineering (Corso di laurea magistrale in Ingegneria Informatica)
attachment
PDF
https://webthesis.biblio.polito.it/secure/26844/1/tesi.pdf
2023-05-06 06:11:46
3
preprint
arXiv
Zhang
Zhujun
Computer Science - Computational Complexity
Perfect Information Hearthstone is PSPACE-hard
We consider the computational complexity of Hearthstone which is a popular online CCG (collectible card game). We reduce a PSPACE-complete problem, the partition game, to perfect information Hearthstone in which there is no hidden information or random elements. In the reduction, each turn in Hearthstone is used to simulate one choice in the partition game. It is proved that determining whether the player has a forced win in perfect information Hearthstone is PSPACE-hard.
2023-05-22
arXiv.org
http://arxiv.org/abs/2305.12731
2023-05-31 06:12:11
arXiv:2305.12731 [cs]
arXiv:2305.12731
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2305.12731.pdf
2023-05-31 06:12:22
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2305.12731
2023-05-31 06:12:28
1
text/html
preprint
arXiv
Kowalski
Jakub
Miernik
Radosław
Computer Science - Artificial Intelligence
Summarizing Strategy Card Game AI Competition
This paper concludes five years of AI competitions based on Legends of Code and Magic (LOCM), a small Collectible Card Game (CCG) designed with the goal of supporting research and algorithm development. The game was used in a number of events, including Community Contests on the CodinGame platform, and the Strategy Card Game AI Competition at the IEEE Congress on Evolutionary Computation and the IEEE Conference on Games. LOCM has been used in a number of publications related to areas such as game tree search algorithms, neural networks, evaluation functions, and CCG deckbuilding. We present the rules of the game, the history of the organized competitions, and a listing of the participants and their approaches, as well as some general advice on organizing AI competitions for the research community. Although the COG 2022 edition was announced to be the last one, the game remains available and can be played using an online leaderboard arena.
2023-05-19
arXiv.org
http://arxiv.org/abs/2305.11814
2023-05-31 06:13:47
arXiv:2305.11814 [cs]
arXiv:2305.11814
attachment
arXiv Fulltext PDF
https://arxiv.org/pdf/2305.11814.pdf
2023-05-31 06:13:56
1
application/pdf
attachment
arXiv.org Snapshot
https://arxiv.org/abs/2305.11814
2023-05-31 06:14:02
1
text/html
conferencePaper
ISBN 978-1-4503-9421-5
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI 10.1145/3544548.3581550
Hamburg, Germany
ACM
Sidji
Matthew
Smith
Wally
Rogerson
Melissa J.
The Hidden Rules of Hanabi: How Humans Outperform AI Agents
2023-04-19
en
The Hidden Rules of Hanabi
DOI.org (Crossref)
https://dl.acm.org/doi/10.1145/3544548.3581550
2023-05-31 06:19:29
1-16
CHI '23: CHI Conference on Human Factors in Computing Systems
journalArticle
Vieira
Ronaldo E Silva
Rocha Tavares
Anderson
Chaimowicz
Luiz
Towards sample efficient deep reinforcement learning in collectible card games
7/2023
en
DOI.org (Crossref)
https://linkinghub.elsevier.com/retrieve/pii/S1875952123000496
2023-07-27 12:13:09
100594
Entertainment Computing
DOI 10.1016/j.entcom.2023.100594
Entertainment Computing
ISSN 18759521
2048
Accessibility
Azul
Blokus
Carcassonne
Diplomacy
Dixit
Dominion
Frameworks
Game Design
General Gameplay
Hanabi
Hearthstone
Hive
Jenga
Kingdomino
Lost Cities
Mafia
Magic: The Gathering
Mobile Games
Modern Art: The card game
Monopoly
Monopoly Deal
Netrunner
Nmbr9
Pandemic
Patchwork
Pentago
Puerto Rico
Quixo
Race for the Galaxy
Resistance: Avalon
RISK
Santorini
Scotland Yard
Secret Hitler
Set
Settlers of Catan
Shobu
Terra Mystica
Terraforming Mars
Tetris Link
Ticket to Ride
Ultimate Tic-Tac-Toe
UNO
Yahtzee