As is the case with any complex, interactive software, many video games are released with minor or major issues that can degrade the user experience, create security risks for players, or be exploited to the detriment of the companies that deliver them. To test their games, companies invest significant resources in quality assurance personnel, who typically perform most of the testing manually. The main goal of our work is to automate the parts of the testing process that involve human testers, thereby reducing costs and running more tests in less time. A secondary goal is to provide mechanisms that make writing test specifications easier and more efficient. We focus on solving initial real-world problems that emerged from discussions with industry partners. In this paper, we present RiverGame, a tool that allows game developers to automatically test their products from different points of view: the rendered output, the sound played by the game, the animation and movement of entities, performance, and various statistical analyses. We also address the problems of input prioritization, scheduling, and directing the testing effort along custom and dynamic directions. At the core of our methods, we use state-of-the-art artificial intelligence techniques for analysis and a behavior-driven development methodology for test specifications. Our technical solution is open source and independent of game engine, platform, and programming language.