<h1 id="title">Title</h1>
<p>Parallelized Conflict-Based Search for Multi-Agent Pathfinding</p>
<h1 id="url">URL</h1>
<p><a href="https://jsongcmu.github.io/parallel_CBS">https://jsongcmu.github.io/parallel_CBS</a></p>
<h1 id="summary">Summary</h1>
<p>We are going to parallelize the Conflict-Based Search (CBS) algorithm, which is used to solve Multi-Agent Pathfinding (MAPF) problems, targeting a multi-core CPU platform.</p>
<h1 id="background">Background</h1>
<p>CBS is a two-level algorithm. At the low level, it runs A* to find a path for each individual agent. At the high level, it builds a binary tree (known as the constraint tree) that determines which collisions between agents to resolve, and how. The root of this tree contains an instance of the problem in which every agent naively plans its shortest path to its goal. If no collisions occur in this instance, the problem is solved. Otherwise, the first collision between any two agents is selected and two child nodes (subtrees) are created: one contains the same problem instance but adds a constraint preventing the first agent from occupying the collision location at the collision timestep, and the other is identical except that it constrains the second agent instead. Each child then replans only the newly constrained agent, and the high level always expands the cheapest node next. The figure below visualizes the constraint tree.</p>
<p><img src="images/CBS.png" alt="img"></p>
<p><em>Credits: Jiaoyang Li. 16-891: Multi-robot Planning and Coordination</em></p>
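<p>To make the two levels concrete, the C++ sketch below outlines the high-level loop. It is a minimal sketch, not our final design: the types and helpers (<code>CTNode</code>, <code>Constraint</code>, <code>lowLevelSearch</code>, <code>findCollision</code>, <code>sumOfCosts</code>) are illustrative placeholders we introduce here, and the low-level A* search is only declared, not implemented.</p>
<pre><code>#include &lt;optional&gt;
#include &lt;queue&gt;
#include &lt;utility&gt;
#include &lt;vector&gt;

// Illustrative placeholder types; the real project will define its own.
struct Constraint { int agent, x, y, t; };      // "agent may not occupy (x, y) at time t"
struct Collision  { int a1, a2, x, y, t; };     // first collision found in a node's paths
using Path = std::vector&lt;std::pair&lt;int, int&gt;&gt;;  // one (x, y) cell per timestep
struct CTNode {
    std::vector&lt;Constraint&gt; constraints;
    std::vector&lt;Path&gt; paths;                    // one path per agent
    int cost = 0;                               // sum of individual path costs
};
struct CompareCost {
    bool operator()(const CTNode& a, const CTNode& b) const { return a.cost &gt; b.cost; }
};

// Assumed helpers, declared but not implemented in this sketch.
Path lowLevelSearch(int agent, const std::vector&lt;Constraint&gt;& constraints);
std::optional&lt;Collision&gt; findCollision(const std::vector&lt;Path&gt;& paths);
int sumOfCosts(const std::vector&lt;Path&gt;& paths);

std::optional&lt;CTNode&gt; cbs(int numAgents) {
    std::priority_queue&lt;CTNode, std::vector&lt;CTNode&gt;, CompareCost&gt; open;

    // Root node: every agent plans its own shortest path, ignoring the others.
    CTNode root;
    for (int a = 0; a &lt; numAgents; ++a)
        root.paths.push_back(lowLevelSearch(a, root.constraints));
    root.cost = sumOfCosts(root.paths);
    open.push(root);

    while (!open.empty()) {
        CTNode node = open.top();               // always expand the cheapest node
        open.pop();

        auto collision = findCollision(node.paths);
        if (!collision) return node;            // cheapest node is collision-free: optimal

        // Branch: each child forbids one of the two colliding agents from the
        // collision cell at the collision timestep, then replans only that agent.
        for (int agent : {collision-&gt;a1, collision-&gt;a2}) {
            CTNode child = node;
            child.constraints.push_back({agent, collision-&gt;x, collision-&gt;y, collision-&gt;t});
            child.paths[agent] = lowLevelSearch(agent, child.constraints);
            child.cost = sumOfCosts(child.paths);
            open.push(child);
        }
    }
    return std::nullopt;                        // no solution exists
}
</code></pre>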
<p>CBS is attractive because it finds optimal solutions to the MAPF problem, but it is notoriously slow, so faster sub-optimal algorithms are often chosen instead. Our goal is to improve the performance of CBS by parallelizing it. The constraint tree cleanly separates the work associated with each node, so different nodes/subtrees can be expanded in parallel.</p>
<h1 id="challenge">Challenge</h1>
<p>At first glance the problem may seem trivial to parallelize: simply expand each subtree in parallel. However, coordinating which nodes should be expanded poses a challenge. The constraint tree is itself performing a best-first search for the set of constraints that yields the optimal configuration of paths, so eagerly executing every node as soon as it is created quickly becomes inefficient: we may expand down a subtree whose cost exceeds the optimal cost, and any work spent on that subtree is wasted because it does not move us toward the solution. Similarly, ensuring that no further computation is performed once a solution is found is also challenging, since an individual node may take a long time to process. Lastly, as we have seen in homeworks 3 and 4, parallelizing tree-structured computations can lead to poor load balancing, so avoiding such situations will be critical in our implementation. CBS and A* are both memory-bound due to the large number of memory accesses required for path planning, collision checking, and cost computation. Each subtree should in theory exhibit good locality because of heavy data reuse, but structuring the implementation correctly will be critical to exploiting that locality.</p>
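<p>As a rough illustration of this coordination problem (a sketch under assumptions of our own, not a final design), the code below reuses the placeholder types from the Background sketch and runs a pool of worker threads over a single best-first open list guarded by a mutex, with an atomic incumbent cost used to prune nodes that can no longer beat the best solution found so far. The termination test, the locking granularity, and load balancing across workers are exactly the parts we expect to have to refine.</p>
<pre><code>#include &lt;atomic&gt;
#include &lt;functional&gt;
#include &lt;limits&gt;
#include &lt;mutex&gt;
#include &lt;optional&gt;
#include &lt;queue&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

// Reuses CTNode, CompareCost, findCollision, lowLevelSearch, sumOfCosts from the earlier sketch.
struct SharedSearch {
    std::priority_queue&lt;CTNode, std::vector&lt;CTNode&gt;, CompareCost&gt; open;
    std::mutex mtx;
    std::atomic&lt;int&gt; incumbentCost{std::numeric_limits&lt;int&gt;::max()};
    std::optional&lt;CTNode&gt; incumbent;   // best collision-free node found so far
    int inFlight = 0;                  // nodes currently being expanded by some worker
};

void worker(SharedSearch& s) {
    for (;;) {
        CTNode node;
        {
            std::unique_lock&lt;std::mutex&gt; lock(s.mtx);
            bool exhausted = s.open.empty() ||
                             s.open.top().cost &gt;= s.incumbentCost.load();
            if (exhausted) {
                if (s.inFlight == 0) return;   // nothing cheaper anywhere: incumbent is optimal
                lock.unlock();
                std::this_thread::yield();     // in-flight nodes may still add cheaper children
                continue;
            }
            node = s.open.top();
            s.open.pop();
            ++s.inFlight;
        }

        auto collision = findCollision(node.paths);
        std::vector&lt;CTNode&gt; children;
        if (collision) {
            for (int agent : {collision-&gt;a1, collision-&gt;a2}) {
                CTNode child = node;
                child.constraints.push_back({agent, collision-&gt;x, collision-&gt;y, collision-&gt;t});
                child.paths[agent] = lowLevelSearch(agent, child.constraints);  // heavy work, no lock held
                child.cost = sumOfCosts(child.paths);
                children.push_back(std::move(child));
            }
        }

        std::lock_guard&lt;std::mutex&gt; lock(s.mtx);
        if (!collision && node.cost &lt; s.incumbentCost.load()) {
            s.incumbentCost.store(node.cost);  // collision-free and cheaper: new best solution
            s.incumbent = node;
        }
        for (auto& c : children)
            if (c.cost &lt; s.incumbentCost.load())   // prune nodes that cannot beat the incumbent
                s.open.push(std::move(c));
        --s.inFlight;
    }
}

std::optional&lt;CTNode&gt; parallelCBS(const CTNode& root, int numThreads) {
    SharedSearch s;
    s.open.push(root);                     // root built exactly as in the sequential sketch
    std::vector&lt;std::thread&gt; pool;
    for (int i = 0; i &lt; numThreads; ++i) pool.emplace_back(worker, std::ref(s));
    for (auto& t : pool) t.join();
    return s.incumbent;
}
</code></pre>
<p>A single global lock around the open list will likely become a bottleneck at higher thread counts, which is where per-worker open lists and the load-balancing concerns above come into play.</p>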
<h1 id="resources">Resources</h1>
<p>We need access to computers with multiple cores. Primarily, we will be working on GHC computers to develop and test our code. We may need access to PSC as well to test how scalable our code is. We will be implementing our code from scratch, but the CBS algorithm is described here: </p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0004370214001386">G. Sharon, R. Stern, A. Felner, and N. R. Sturtevant, “Conflict-based search for optimal multi-agent pathfinding,” Artif. Intell., vol. 219, pp. 40–66, 2015.</a></p>
<h1 id="goals-and-deliverables">Goals and Deliverables</h1>
<p>Planned Goals:</p>
<ul>
<li>Develop single-threaded CBS algorithm in C++<ul>
<li>Implement the low-level path-finding search (A*; see the sketch after the Planned Goals list)</li>
<li>Implement the high-level conflict search</li>
</ul>
</li>
<li>Develop parallelized CBS algorithm in C++<ul>
<li>Achieve correctness while running on the GHC machines</li>
<li>Complete searches in less time than the single-threaded version</li>
</ul>
</li>
<li>Set up environment to evaluate algorithms for correctness and performance<ul>
<li>Characterize the number of agents single-threaded CBS can accommodate</li>
<li>Characterize the number of agents multi-threaded CBS can accommodate</li>
</ul>
</li>
</ul>
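<p>For the low-level search referenced in the first planned goal, a minimal, unoptimized sketch is shown below: single-agent A* over a time-expanded grid (a state is a cell plus a timestep, so waiting in place is a move) that rejects any move violating the constraints handed down by the constraint tree. It reuses the <code>Path</code> and <code>Constraint</code> placeholders from the Background sketch and adds hypothetical grid/start/goal parameters; the real signature and data structures may differ.</p>
<pre><code>#include &lt;cstdlib&gt;
#include &lt;queue&gt;
#include &lt;set&gt;
#include &lt;tuple&gt;
#include &lt;utility&gt;
#include &lt;vector&gt;

struct Cell { int x, y; };   // hypothetical grid coordinate type

// Time-expanded A* for one agent; each step (including waiting) costs 1.
Path lowLevelSearch(int agent, const std::vector&lt;Constraint&gt;& constraints,
                    const std::vector&lt;std::vector&lt;int&gt;&gt;& grid,   // 0 = free, 1 = obstacle
                    Cell start, Cell goal) {
    struct State { int x, y, t, f; Path path; };
    auto h = [&](int x, int y) { return std::abs(x - goal.x) + std::abs(y - goal.y); };  // Manhattan heuristic
    auto cmp = [](const State& a, const State& b) { return a.f &gt; b.f; };
    std::priority_queue&lt;State, std::vector&lt;State&gt;, decltype(cmp)&gt; open(cmp);
    std::set&lt;std::tuple&lt;int, int, int&gt;&gt; closed;   // expanded (x, y, t) states

    open.push({start.x, start.y, 0, h(start.x, start.y), {{start.x, start.y}}});
    while (!open.empty()) {
        State s = open.top(); open.pop();
        if (s.x == goal.x && s.y == goal.y) return s.path;
        if (!closed.insert({s.x, s.y, s.t}).second) continue;   // already expanded

        // Five moves per timestep: wait, up, down, left, right.
        const int dx[] = {0, 0, 0, -1, 1}, dy[] = {0, -1, 1, 0, 0};
        for (int k = 0; k &lt; 5; ++k) {
            int nx = s.x + dx[k], ny = s.y + dy[k], nt = s.t + 1;
            if (nx &lt; 0 || ny &lt; 0 || nx &gt;= (int)grid.size() || ny &gt;= (int)grid[0].size()) continue;
            if (grid[nx][ny] == 1) continue;       // blocked cell
            bool forbidden = false;                // reject moves the CT node forbids for this agent
            for (const auto& c : constraints)
                if (c.agent == agent && c.x == nx && c.y == ny && c.t == nt) { forbidden = true; break; }
            if (forbidden) continue;
            State next{nx, ny, nt, nt + h(nx, ny), s.path};   // f = g + h, with g = nt
            next.path.push_back({nx, ny});
            open.push(std::move(next));
        }
    }
    return {};   // no path satisfies the constraints
}
</code></pre>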
<p>Stretch Goals:</p>
<ul>
<li>CBS has many additional heuristics that improve its performance; implementing a selection of these may further improve speedup</li>
<li>Modify algorithm to consider multiple collisions per node, creating more informed constraints to improve search performance</li>
</ul>
<p>Performance:</p>
<ul>
<li>We must first implement and benchmark the single-threaded application before we can give specific numbers, but we include a characterization of CBS on unspecified hardware as a reference. The figure below shows a typical success rate as the number of agents grows, with a computation time limit of 1 minute. A single-threaded CBS implementation is only expected to solve a given problem instance about 20% of the time when there are 120 agents. By parallelizing CBS, we increase search speed, which should raise the success rate for a given number of agents, or equivalently sustain the same success rate with more agents. As a preliminary performance goal, we want to increase the number of agents the system can accommodate by 25%, measured at a 50% success rate: we will measure how many agents are required before our single-threaded implementation drops to a 50% success rate, then compare this to the number of agents our multi-threaded implementation can accommodate before reaching 50%. Achieving a 25% increase would show that our search is significantly faster, and would extend CBS far enough to make it viable for systems with more robots. A rough sketch of the measurement harness appears after the figure below.
<img src="images/performance_graph_example.png" alt="img"></li>
</ul>
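<p>The measurement itself can be a simple harness like the sketch below, where <code>Instance</code>, <code>loadInstance</code>, and <code>solve</code> are hypothetical hooks into our solver and the 60-second budget matches the figure above: for each agent count, run a batch of benchmark instances, count how many are solved within the budget, and report the success rate. Comparing the agent count at which the single-threaded and multi-threaded solvers each drop to 50% gives the 25% target described above.</p>
<pre><code>#include &lt;chrono&gt;
#include &lt;cstdio&gt;
#include &lt;vector&gt;

// Hypothetical hooks into the solver; these names are placeholders.
struct Instance { /* map, start cells, goal cells */ };
Instance loadInstance(int numAgents, int trial);
bool solve(const Instance& inst, std::chrono::seconds budget);   // true if solved within budget

// Success rate = fraction of trials solved within the time budget.
void characterize(const std::vector&lt;int&gt;& agentCounts, int trialsPerCount) {
    for (int n : agentCounts) {
        int solved = 0;
        for (int trial = 0; trial &lt; trialsPerCount; ++trial)
            if (solve(loadInstance(n, trial), std::chrono::seconds(60))) ++solved;
        std::printf("%d agents: success rate %.1f%%\n", n, 100.0 * solved / trialsPerCount);
    }
}
</code></pre>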
<h1 id="platform-of-choice">Platform of Choice</h1>
<p>Multi-agent systems are either centralized or decentralized. In centralized applications, a central server does the majority of the computation, including running CBS. In a decentralized system, each agent handles its own computation and planning, such as running CBS itself. This means CBS may run on a powerful central server or on a compute- and power-limited mobile platform. We want our parallel implementation of CBS to work in both settings, so a multi-core CPU platform makes the most sense: we cannot assume the agents will have access to GPUs, so we prefer to avoid CUDA.</p>
<h1 id="schedule">Schedule</h1>
<table>
<thead>
<tr>
<th>Date</th>
<th>Tasks</th>
</tr>
</thead>
<tbody>
<tr>
<td>April 3-7</td>
<td>- Implement single-threaded A* and CBS</td>
</tr>
<tr>
<td>April 10-14</td>
<td>- Naïve multi-threaded CBS</td>
</tr>
<tr>
<td>April 17-21</td>
<td>- Analyze performance and identify bottlenecks<br>- Project Milestone report</td>
</tr>
<tr>
<td>April 24-28</td>
<td>- Improve performance of parallel CBS</td>
</tr>
<tr>
<td>May 1-5</td>
<td>- Finalize improvements <br> - Perform detailed analysis <br> - Project Report due <br> - Project Poster Presentation</td>
</tr>
</tbody>
</table>