Running the demo with the default SSD model
By default, the depthai_demo.py script loads an SSD (Single Shot Detector) object-detection model.
To run it, just launch the script without any extra arguments; it will load the model and start the video stream:
python3 depthai_demo.py
Running the demo with YOLO v3
Run it with the following command:
python3 depthai_demo.py -cnn yolo-v3
The neural network frame rate with YOLO v3 (OpenVINO) was fairly stable at 2 frames per second, which is pretty fast considering how slow YOLO v3 can be.
I have benchmarked YOLO v3 before: running it from a compiled darknet build, analyzing a single image on the same machine can easily take 18 seconds.
Running YOLO v3 via the OpenCV DNN module (which is heavily optimized) takes about 3 seconds per image.
So when I say it's pretty fast, I really mean it: 2 frames per second is half a second per frame, which is pretty good!
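To put those numbers side by side, here is a quick back-of-the-envelope comparison. The timings are the rough figures quoted above, not precise benchmarks:

```python
# Rough per-frame latency for YOLO v3 in the three setups discussed above
# (approximate figures from informal testing, not controlled benchmarks).
timings_s_per_frame = {
    "darknet (compiled from source)": 18.0,    # ~18 s per image
    "OpenCV DNN module": 3.0,                  # ~3 s per image
    "DepthAI (OpenVINO, on-device)": 1 / 2.0,  # 2 fps -> 0.5 s per frame
}

for name, seconds in timings_s_per_frame.items():
    print(f"{name}: {seconds:.2f} s/frame ({1 / seconds:.1f} fps)")

# Relative speedup of the on-device pipeline
print(f"~{18.0 / 0.5:.0f}x faster than darknet, "
      f"~{3.0 / 0.5:.0f}x faster than OpenCV DNN")
```

So even against an optimized CPU pipeline, the on-device inference is roughly six times faster.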
Depth detection testing
Checking it with a distant object, the Z reading was 1.72 meters. I measured the distance with a tape measure at about 1.80 meters, so it did really well. Note that I may not have measured along the camera's exact line of sight, so I could be off by a few inches.
Checking the distance from the camera to my face, I measured 55 centimeters with the tape, and as you can see from the screenshot, the camera read 62 centimeters, so good results overall.
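For a rough sense of accuracy, here is the error on those two checks, computed from the readings above:

```python
# Depth-accuracy check: (label, device reading, tape-measured distance),
# both in meters, using the two measurements described above.
checks = [
    ("distant object", 1.72, 1.80),
    ("face", 0.62, 0.55),
]

for label, reading_m, actual_m in checks:
    abs_err_cm = abs(reading_m - actual_m) * 100
    pct_err = abs(reading_m - actual_m) / actual_m * 100
    print(f"{label}: off by {abs_err_cm:.0f} cm ({pct_err:.1f}%)")
```

That works out to roughly an 8 cm error at 1.80 meters and a 7 cm error at 55 centimeters, which is decent for a quick hand-held test.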
Overall, it was a good buy. I would have liked to avoid the customs surprise, but fair enough.
The hardware seems really solid: the cameras are good and the case is well built. From what I can see so far, it works really well, and I look forward to trying new things with it.
Did you find this article useful?
Share your comments and experiences! Tell us what works and what doesn't work for you. Do you have one, or are you thinking of getting one?
If you liked this post and want to see more, make sure you give it some applause (about 50 is a good number!).
And to stay up to date and read more articles about my adventures with the DepthAI camera, make sure you follow me!