
CSE 5120 Introduction to Artificial Intelligence: Pacman lives in a shiny blue world of twisting corridors and tasty round treats. Navigating this world efficiently will be Pacman's first step in mastering his domain.

 

This assignment should be completed in Python. Starter code is provided; you will edit the indicated sections and write a report.

A .zip file and a Word file are attached below.

The .zip file contains the starter code. You should edit the indicated sections by following the instructions.

The Word file contains the instructions for the work and the report template.

Although the Word file is 6 pages, most of its content consists of attached pictures and the report template, and the tasks are well explained, so it is easy to follow.

You should submit the code as a .zip file and the report following the template.

Homework 1: Search algorithms (Pacman)

Module: CSE 5120 Introduction to Artificial Intelligence

Assessment brief: The code and resources provided in this homework

Pacman lives in a shiny blue world of twisting corridors and tasty round treats. Navigating this world efficiently will be Pacman’s first step in mastering his domain.

The code for this project consists of several Python files, some of which you will need to read and understand in order to complete the assignment, and some of which you can ignore. You can download all the code and supporting files as a zip folder from the homework 1 link on Blackboard.

Your homework consists of two parts, as given below:

1. Code implementing the search algorithms in the given search.py file (in the specific sections indicated in detail below)

2. A brief report on what you did for each algorithm (i.e., how you implemented it, with screenshots from the autograder script given in the folder)

The files are:

search.py: Where all of your search algorithms will reside.

searchAgents.py: Where all of your search-based agents will reside.

pacman.py: The main file that runs Pacman games. This file also describes a Pacman GameState type, which you will use in this project.

game.py: The logic behind how the Pacman world works. This file describes several supporting types like AgentState, Agent, Direction, and Grid.

util.py: Useful data structures for implementing search algorithms.

After downloading the code, unzipping it, and changing to the directory, you should be able to play a game of Pacman by running the following command.

python pacman.py

pacman.py supports a number of options (e.g., --layout or -l). You can see the list of all options and their default values via python pacman.py -h.

All the commands you will need in this homework can be found in the file commands.txt for easy copying and pasting. You can use Spyder (installed through Anaconda from week 1 Thursday's lecture) or another IDE for this work.

Files to Edit and Submit: You will need to edit and submit search.py (and searchAgents.py, only if required) to implement your algorithms. Once you have completed the homework, you are welcome to check your work for accuracy by running the automated tests in autograder.py, given in the folder, before you submit. You do not need to include autograder.py in your code submission, but you will need to test your algorithms with it to copy screenshots into your report. Please do not change the other files in this distribution or submit any of the original files other than these.

Academic Dishonesty: Your code will be checked against other submissions in the class for logical redundancy. If you copy someone else's code and submit it with minor changes, it will be detected easily, so please do not try that; submit your own work only. In case of cheating, the University's academic policies on cheating and dishonesty will strictly apply, with penalties ranging from a grade deduction to expulsion.

Figure 1: Breadth First and Uniform Cost search algorithms – pseudocode

Figure 2: Tree Search algorithm pseudocode


Tasks for homework 1

1. Understanding the code base (not graded)

In searchAgents.py, you will find a fully implemented SearchAgent, which plans out a path through Pacman's world and then executes that path step-by-step. The search algorithms for formulating a plan are not implemented: your task is to implement them.

First, test that the SearchAgent is working correctly by running the following command.

python pacman.py -l tinyMaze -p SearchAgent -a fn=tinyMazeSearch

The command above tells the SearchAgent to use tinyMazeSearch as its search algorithm. This algorithm is implemented in search.py. Pacman should navigate the maze successfully.

Now you will need to implement different search algorithms to help Pacman plan its routes and reach its goal. Remember that a search node must contain not only a state but also the information necessary to reconstruct the path (plan) which gets to that state from the start state.

Important note: All of your search functions need to return a list of actions that will lead the agent from the start to the goal. These actions all have to be legal moves (valid directions, no moving through walls).

Important note: Make sure to use the Stack, Queue, and PriorityQueue data structures provided to you in util.py! These data structure implementations have particular properties that are required for compatibility with the autograder.

Hint: The algorithms we have covered so far are quite similar: DFS, BFS, UCS, and A* differ only in the details of how the fringe (or frontier) is managed. So, concentrate on getting DFS right, and the rest should be relatively straightforward. Indeed, one possible implementation requires only a single generic search method, configured with an algorithm-specific queuing strategy. (Your implementation need not take this form to receive full credit.)
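The single-generic-method idea from the hint can be sketched as follows. This is an illustrative sketch on a made-up toy graph, not the assignment's SearchProblem API; the frontier factories stand in for util.py's Stack and Queue, and each frontier node carries (state, actions-so-far) so the plan can be reconstructed at the goal.

```python
from collections import deque

# Toy graph for illustration (hypothetical, not a Pacman state space):
# state -> list of (successor, action, step_cost)
GRAPH = {
    'A': [('B', 'A->B', 1), ('C', 'A->C', 4)],
    'B': [('D', 'B->D', 5)],
    'C': [('D', 'C->D', 1)],
    'D': [],
}

def stack_frontier():
    """LIFO frontier -> depth-first search."""
    data = []
    return data.append, data.pop, lambda: not data

def queue_frontier():
    """FIFO frontier -> breadth-first search."""
    data = deque()
    return data.append, data.popleft, lambda: not data

def generic_search(start, is_goal, successors, make_frontier):
    """One search skeleton; the queuing strategy decides the algorithm."""
    push, pop, empty = make_frontier()
    push((start, []))                    # node = (state, actions so far)
    visited = set()
    while not empty():
        state, actions = pop()
        if is_goal(state):
            return actions               # the plan: a list of actions
        if state in visited:
            continue
        visited.add(state)
        for nxt, action, _cost in successors(state):
            push((nxt, actions + [action]))
    return None

bfs_plan = generic_search('A', lambda s: s == 'D', GRAPH.get, queue_frontier)
dfs_plan = generic_search('A', lambda s: s == 'D', GRAPH.get, stack_frontier)
```

Swapping only the frontier changes the search order: BFS finds the fewest-action plan, while DFS follows the most recently discovered branch.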

2. Depth First Search (1%)

Implement the depth-first search (DFS) algorithm in the depthFirstSearch function in search.py.

Your code should be able to solve these tasks quickly.

1. python pacman.py -l tinyMaze -p SearchAgent

2. python pacman.py -l mediumMaze -p SearchAgent

3. python pacman.py -l bigMaze -z .5 -p SearchAgent

Evaluation: Run the following command to test your solution: python autograder.py -q q1. The first 4 test cases are basic test cases. Together they account for 0.8%. If any one of them fails, the fifth test case will not be evaluated. The fifth test case accounts for 0.2%.
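One way depthFirstSearch might be structured is sketched below. The Stack class is a stand-in with the same push/pop/isEmpty interface as util.Stack, and the corridor graph is a made-up example rather than a real maze layout; the real implementation would call the problem's getSuccessors instead.

```python
class Stack:
    """Stand-in with the same push/pop/isEmpty interface as util.Stack."""
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()
    def isEmpty(self):
        return len(self.items) == 0

def depth_first_search(start, is_goal, successors):
    """Expand the deepest unexpanded node first (LIFO frontier)."""
    frontier = Stack()
    frontier.push((start, []))           # (state, actions to reach it)
    visited = set()
    while not frontier.isEmpty():
        state, actions = frontier.pop()
        if is_goal(state):
            return actions               # legal action sequence to the goal
        if state in visited:
            continue
        visited.add(state)
        for nxt, action, _cost in successors(state):
            frontier.push((nxt, actions + [action]))
    return []

# Tiny corridor S - M - G (hypothetical, for illustration only)
corridor = {'S': [('M', 'East', 1)], 'M': [('G', 'East', 1)], 'G': []}
plan = depth_first_search('S', lambda s: s == 'G', corridor.get)
```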

3. Breadth First Search (1%)

Implement the breadth-first search (BFS) algorithm in the breadthFirstSearch function in search.py.

Your code should be able to solve these tasks quickly.

1. python pacman.py -l mediumMaze -p SearchAgent -a fn=bfs

2. python pacman.py -l bigMaze -p SearchAgent -a fn=bfs -z .5

Evaluation: Run the following command to test your solution: python autograder.py -q q2. The first 4 test cases are basic test cases. Together they account for 0.8%. If any one of them fails, the fifth test case will not be evaluated. The fifth test case accounts for 0.2%.
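A breadthFirstSearch sketch differs from DFS only in the frontier: a FIFO queue, so the shallowest unexpanded node is expanded first. Here collections.deque stands in for util.Queue, and the graph is invented for illustration.

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Expand the shallowest unexpanded node first (FIFO frontier)."""
    frontier = deque([(start, [])])      # (state, actions to reach it)
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for nxt, action, _cost in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return []

# Two routes to G; BFS returns the one with fewer actions.
graph = {
    'S': [('A', 'North', 1), ('G', 'East', 10)],
    'A': [('G', 'East', 1)],
    'G': [],
}
plan = breadth_first_search('S', lambda s: s == 'G', graph.get)
```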

4. Uniform Cost Search (1%)

BFS minimizes the number of actions taken, but does not necessarily find the least-cost path. By changing the cost function, we can encourage Pacman to find different paths: for example, we can charge more for dangerous steps in ghost-ridden areas or less for steps in food-rich areas.

Implement the uniform-cost search (UCS) algorithm in the uniformCostSearch function in search.py (the agents and the cost functions are implemented for you).

You should now observe successful behavior in all three of the following layouts.

1. python pacman.py -l mediumMaze -p SearchAgent -a fn=ucs

2. python pacman.py -l mediumDottedMaze -p StayEastSearchAgent

3. python pacman.py -l mediumScaryMaze -p StayWestSearchAgent

Evaluation: Run the following command to test your solution: python autograder.py -q q3. The first 4 test cases are basic test cases. Together they account for 0.8%. If any one of them fails, the fifth test case will not be evaluated. The fifth test case accounts for 0.2%.
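uniformCostSearch orders the frontier by cumulative path cost g(n). In search.py this would use util.PriorityQueue; the sketch below substitutes Python's heapq, on a made-up weighted graph, purely for illustration.

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """Expand the node with the lowest path cost g(n) first."""
    counter = 0                                  # unique tie-breaker for heapq
    frontier = [(0, counter, start, [])]         # (g, tie, state, actions)
    best_g = {}                                  # cheapest g seen per state
    while frontier:
        g, _, state, actions = heapq.heappop(frontier)
        if is_goal(state):
            return actions
        if state in best_g and best_g[state] <= g:
            continue                             # already expanded more cheaply
        best_g[state] = g
        for nxt, action, cost in successors(state):
            counter += 1
            heapq.heappush(frontier, (g + cost, counter, nxt, actions + [action]))
    return []

# Cheapest route is S->A->G (cost 3), not the one-step S->G (cost 10).
graph = {
    'S': [('A', 'North', 1), ('G', 'East', 10)],
    'A': [('G', 'East', 2)],
    'G': [],
}
plan = uniform_cost_search('S', lambda s: s == 'G', graph.get)
```

Note that on this same graph BFS would return the one-step plan; UCS trades more actions for lower total cost.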

5. A* Search (2%)

Implement the A* search algorithm in the aStarSearch function in search.py. A* takes a heuristic function as an argument.

You need to test your A* implementation on the original problem of finding a path through a maze to a fixed position using the Manhattan distance heuristic (already implemented).

python pacman.py -l bigMaze -z .5 -p SearchAgent -a fn=astar,heuristic=manhattanHeuristic

Evaluation: Run the following command to test your solution: python autograder.py -q q4. The first 5 test cases are basic test cases. Together they account for 1.5%. If any one of them fails, the sixth test case will not be evaluated. The sixth test case accounts for 0.5%.
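aStarSearch orders the frontier by f(n) = g(n) + h(n). The sketch below uses heapq in place of util.PriorityQueue and a Manhattan-distance heuristic on a small wall-free grid; the grid, start, and goal are invented for illustration and are not a real Pacman layout.

```python
import heapq

def manhattan(state, goal):
    """Manhattan distance between two (x, y) positions."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def grid_successors(state, width=3, height=3):
    """4-connected unit-cost moves on a wall-free width x height grid."""
    x, y = state
    for action, dx, dy in [('North', 0, 1), ('South', 0, -1),
                           ('East', 1, 0), ('West', -1, 0)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            yield (nx, ny), action, 1

def a_star_search(start, goal, successors, heuristic):
    """Expand the node with the lowest f(n) = g(n) + h(n) first."""
    counter = 0                          # unique tie-breaker for heapq
    frontier = [(heuristic(start, goal), counter, 0, start, [])]
    closed = set()                       # entries: (f, tie, g, state, actions)
    while frontier:
        _f, _, g, state, actions = heapq.heappop(frontier)
        if state == goal:
            return actions
        if state in closed:
            continue
        closed.add(state)
        for nxt, action, cost in successors(state):
            counter += 1
            ng = g + cost
            heapq.heappush(frontier,
                           (ng + heuristic(nxt, goal), counter, ng, nxt,
                            actions + [action]))
    return []

plan = a_star_search((0, 0), (2, 1), grid_successors, manhattan)
```

With an admissible heuristic like Manhattan distance, A* returns an optimal plan (here, 3 moves); the exact move order depends on tie-breaking.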

Report

Give a brief description of your work here, acknowledging your collaboration with a class fellow (or a friend from another CSE 5120 section) and the capacity in which he/she collaborated with you, followed by the algorithms you implemented.

1. Depth First Search

Your brief explanation (e.g., does DFS expand the shallowest or the deepest unexpanded node? Did you use a Stack, Queue, or PriorityQueue in your code?) with screenshots of your code.

Evaluation (results from autograder.py)

2. Breadth First Search

Your brief explanation (e.g., does BFS expand the shallowest or the deepest unexpanded node? Did you use a Stack, Queue, or PriorityQueue in your code?) with screenshots of your code.

Evaluation (results from autograder.py)

3. Uniform Cost Search

Your brief explanation (e.g., does UCS expand the cheapest node or the node closest to the goal state? Which function did you use to expand the cheapest node in this algorithm, and at which line?) with screenshots of your code.

Evaluation (results from autograder.py)

4. A* Search

Your brief explanation (e.g., does A* use g(n) or h(n)? Where in the code are you retrieving the cost of an unexpanded node to plan, and which function did you implement/use to get g(n), h(n), f(n), etc.?) with screenshots of your code.

Evaluation (results from autograder.py)



homework_1_search/commands.txt

python pacman.py
python pacman.py --layout testMaze --pacman GoWestAgent
python pacman.py --layout tinyMaze --pacman GoWestAgent
python pacman.py -h
python pacman.py -l tinyMaze -p SearchAgent -a fn=tinyMazeSearch
python pacman.py -l tinyMaze -p SearchAgent
python pacman.py -l mediumMaze -p SearchAgent
python pacman.py -l bigMaze -z .5 -p SearchAgent
python pacman.py -l mediumMaze -p SearchAgent -a fn=bfs
python pacman.py -l bigMaze -p SearchAgent -a fn=bfs -z .5
python eightpuzzle.py
python pacman.py -l mediumMaze -p SearchAgent -a fn=ucs
python pacman.py -l mediumDottedMaze -p StayEastSearchAgent
python pacman.py -l mediumScaryMaze -p StayWestSearchAgent
python pacman.py -l bigMaze -z .5 -p SearchAgent -a fn=astar,heuristic=manhattanHeuristic
python pacman.py -l tinyCorners -p SearchAgent -a fn=bfs,prob=CornersProblem
python pacman.py -l mediumCorners -p SearchAgent -a fn=bfs,prob=CornersProblem
python pacman.py -l mediumCorners -p AStarCornersAgent -z 0.5
python pacman.py -l testSearch -p AStarFoodSearchAgent
python pacman.py -l trickySearch -p AStarFoodSearchAgent
python pacman.py -l bigSearch -p ClosestDotSearchAgent -z .5

