
    Please use this identifier to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/2314

    Title: Generation of Adaptive Opponents for a Predator-Prey Game
    Authors: Hao-Min Hsieh
    Contributors: Department of Media and Design
    Keywords: computer-controlled opponent;game artificial intelligence;fuzzy rule;genetic algorithm;dynamic difficulty adjustment;predator-prey game
    Date: 2008
    Issue Date: 2009-11-06 13:08:08 (UTC+8)
    Publisher: Asia University
    Abstract: Computer-controlled opponents in computer games are often driven by large numbers of hand-written scripts. Writing such scripts requires a long period of trial and error and is tedious for game developers. Moreover, the tactics of computer-controlled opponents are typically fixed and limited, and their behavioral repetition reduces the human player's enjoyment during gameplay. Furthermore, the predefined fixed difficulty levels found in most commercial games cannot satisfy players of varied experience. To address these problems, some researchers have proposed offline learning approaches that automatically generate game tactics for computer-controlled opponents, and online learning approaches that realize dynamic difficulty adjustment. However, most of these offline and online learning approaches are neural network-based; their major drawback is a lack of transparency, which makes maintenance difficult for game developers, and the adaptation efficiency of the online techniques is insufficient for games played against human players. In this study we propose both offline and online learning approaches. In offline learning, we apply a genetic algorithm to generate a fuzzy rulebase, which serves as the game tactics that guide the computer-controlled opponents during gameplay. In online learning, we use a probabilistic method to adapt the game tactics to the player, so that the game's difficulty level follows the player's preference for challenge. The experimental results show the effectiveness of the offline evolved rulebases and the feasibility of the proposed online learning approach to dynamic difficulty adjustment. These results should help game developers create game AI for their games.
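    The abstract describes online adaptation of game tactics via a probabilistic method, but does not publish the update rule. As an illustration only, the sketch below shows one common way such probabilistic tactic adaptation can work: each tactic carries a selection weight, tactics are sampled in proportion to their weights, and weights are nudged after each encounter so the opponent's difficulty drifts toward the player's level. All names (`AdaptiveTacticSelector`, the tactic labels, the step size) are hypothetical and not taken from the thesis.

    ```python
    import random


    class AdaptiveTacticSelector:
        """Hypothetical sketch of probabilistic tactic selection for
        dynamic difficulty adjustment; not the thesis's actual method."""

        def __init__(self, tactics, min_weight=0.1):
            # Every tactic starts equally likely to be chosen.
            self.weights = {t: 1.0 for t in tactics}
            self.min_weight = min_weight

        def choose(self):
            # Sample a tactic with probability proportional to its weight.
            tactics = list(self.weights)
            return random.choices(
                tactics, weights=[self.weights[t] for t in tactics], k=1
            )[0]

        def update(self, tactic, opponent_won, step=0.2):
            # Difficulty balancing: if the computer opponent won with this
            # tactic, make the tactic less likely (ease off the player);
            # if it lost, make the tactic more likely (push back harder).
            delta = -step if opponent_won else step
            self.weights[tactic] = max(
                self.min_weight, self.weights[tactic] + delta
            )


    selector = AdaptiveTacticSelector(["ambush", "chase", "flank"])
    tactic = selector.choose()
    selector.update(tactic, opponent_won=True)   # opponent too strong: weaken
    selector.update("flank", opponent_won=False)  # opponent too weak: strengthen
    ```

    The `min_weight` floor keeps every tactic selectable, so the opponent never collapses to a single repetitive behavior; the abstract identifies exactly that repetition as a cause of reduced player enjoyment.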
    Appears in Collections: [Department of Digital Media Design] Master's and Doctoral Theses


    All items in ASIAIR are protected by copyright, with all rights reserved.
