Existing Processing Resources are Implied
Given that this research project will "...develop an edge server-based AI application," but apparently not the edge computing platform itself, what assumptions can offerors make about the resources of the edge computing platform? This is critical because artificial intelligence applications for video analysis generally require specialized Graphics Processing Units (GPUs) in addition to the standard Central Processing Units (CPUs) found in general-purpose business computers. Parallel algorithms applied simultaneously to multiple video sources will also require a number of GPUs roughly proportional to the number of video sources (not necessarily one-for-one, but the processing capacity of a single device is clearly finite).
Regarding the resources of the edge computing platform, it is assumed that the offeror will need to purchase an off-the-shelf edge server on the open market with sufficient CPU and GPU capacity to process video and text data from at least three nearby intersections, assuming each intersection has four cameras and one signal controller. If additional sensors (radar, lidar, …) are proposed, there may be additional costs for sensor purchase and installation, as well as additional requirements for data-transfer bandwidth and edge server processing power. As long as latency requirements are met for the real-time signal phasing application, fusing additional sensors' data is a welcome addition. Another requirement is that communications between the edge server and the intersections use 5G high-speed Internet rather than direct point-to-point links; this removes differences in distance-induced latency.
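To make the sizing assumption concrete, the following back-of-envelope sketch estimates GPU count for the stated baseline of three intersections with four cameras each. The per-stream frame rate and per-GPU inference throughput are illustrative assumptions only, not vendor benchmarks; offerors should substitute figures for their chosen detector and hardware.

```python
import math

# Baseline deployment from the answer above.
INTERSECTIONS = 3
CAMERAS_PER_INTERSECTION = 4

# Illustrative assumptions (replace with measured values):
STREAM_FPS = 15               # assumed analysis frame rate per camera
GPU_INFERENCES_PER_SEC = 120  # assumed detector throughput on one GPU

total_streams = INTERSECTIONS * CAMERAS_PER_INTERSECTION
required_fps = total_streams * STREAM_FPS
gpus_needed = math.ceil(required_fps / GPU_INFERENCES_PER_SEC)

print(f"{total_streams} streams -> {required_fps} fps -> {gpus_needed} GPU(s)")
# -> 12 streams -> 180 fps -> 2 GPU(s)
```

Under these assumed numbers, the twelve streams need roughly two GPUs of the hypothesized throughput; the point is that GPU count scales with aggregate frame rate, consistent with the proportionality noted in the question.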