In my previous post I outlined how to get real-time video from your Raspberry Pi with its camera, and to make it somewhat robust. In the conclusion I mentioned that it would be nice to superimpose (overlay) a grid over that image, and speculated that openCV might be just the tool to do it. Here I demonstrate how I have done it, and what compromises I had to make along the way.

Well, let's talk about why I didn't go the openCV route. I began to bring down the source code for raspivid and raspistill, as outlined in this series of blog posts. And I did get it to compile, but it's a lot of packages to bring down, and then I still needed to add in the openCV stuff. The author of those posts provided one example of a hacked source file, but for raspistill, and I needed raspivid, which is slightly different. Then there was cmake to master, which I had no idea about, never having used it before. And then I would have needed to figure out openCV, which in turn might require programming in C, in which I have only the most basic skills. So the barriers were many, the risk was great, and the reward not that great. And then, after all that, my fear was that it would slow down the video to the point where we would lose the real-time aspect!

Logic dictates there should be another way. Something decides what pixels to display, and this is true for every window, including mplayer's. So if we can get control of whatever decides how to draw pixels in Windows, we can draw our grid on the client side in Windows rather than on the encoder side on the Pi.

So I looked for a way to superimpose an image using mplayer. I don't know how to bring up the mplayer documentation in Windows, but on the Pi it's just

man mplayer

and you'll get a whole long listing. Though they don't use the term "superimpose," I was soon drawn to what sounded similar: a -vf (video filter) switch with post-processing capability. Under -vf are different filters, none of which sounded very promising, until I came across geq (general equation). I searched for examples on the web and came across a very helpful discussion of how to use it, with examples.

A lot of the stuff I tried initially didn't work. Then, when it did work, it lost the real-time feature that's so important to us, or the convergence to real-time took too long. I finally settled on this string for my mplayer:

c:\apps\netcat\nc 192.168.0.90 443 | c:\apps\smplayer\mplayer\mplayer -vf geq=p(X\,Y)*(1-gt(mod(X/SW\,100)\,98))*(1-gt(mod(Y/SH\,100)\,98)) -ontop -fps 27 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -

On the Pi, /etc/init.d/raspi-vid now includes this key line:

raspivid -n -o - -t 9999999 -rot 180 -w 560 -h 420 -b 1000000 -fps 9

And, voila, my grid of black lines appears at 100-pixel intervals. Convergence is about 30 seconds, and we have preserved the real-timeyness of the video!

But it was a series of compromises and tuning that got me there. For my desired 640 x 480 video I could only get real-time video at about 7 fps (frames per second). I think my PC, a Dell Inspiron 660, just can't keep up at higher fps. When you think about it, the filter has to do calculations for each and every pixel, which must introduce quite some overhead. Perhaps things will go better on PCs that don't need the -vo gl switch of mplayer, which I have to use on my Dell display.

I decided there was a small but noticeable difference between 7 fps and 9 fps. So I kept the pixels per second constant and calculated what area I would have to shrink the picture to in order to increase the fps to a value that gave me sufficient real-timeyness. With the 4:3 aspect ratio fixed, the pixel rate scales as the width squared times the fps, so

w9 = w7 * sqrt(7/9) = 640 * 0.88 ~ 560 pixels

And that worked out! A slightly smaller width gives us fewer pixels to have to calculate, and allows us to converge to real-time with almost unnoticeable lag.

For the full versions of the files, and more discussion about the switches I chose, go back to my previous article about screaming streaming on the Pi, and just substitute in these lines in the obvious place.
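To make the geq expression and the resize arithmetic concrete, here is a small Python sketch. This is purely my own illustration, not anything mplayer runs: the function name geq_grid and the SW = SH = 1 defaults (the values geq uses on the luma plane) are assumptions made for the sketch.

```python
import math

def geq_grid(p, X, Y, SW=1.0, SH=1.0):
    """Replicate the -vf geq expression
    p(X,Y)*(1-gt(mod(X/SW,100),98))*(1-gt(mod(Y/SH,100),98)).
    gt(a,b) is 1 when a > b, else 0; SW and SH are geq's per-plane
    scale factors (1 on the luma plane)."""
    gx = 1 if math.fmod(X / SW, 100) > 98 else 0
    gy = 1 if math.fmod(Y / SH, 100) > 98 else 0
    return p * (1 - gx) * (1 - gy)

# The last column/row of every 100-pixel block is zeroed, which draws
# roughly 1-pixel-wide black lines at 100-pixel intervals:
print(geq_grid(180, 99, 50))   # 0   -> pixel sits on a vertical grid line
print(geq_grid(180, 50, 199))  # 0   -> pixel sits on a horizontal grid line
print(geq_grid(180, 50, 50))   # 180 -> pixel passes through unchanged

# The resize arithmetic: hold pixels per second (w * h * fps, with
# h = 0.75 * w) constant while raising the frame rate from 7 to 9 fps:
w9 = 640 * math.sqrt(7 / 9)
print(round(w9))               # 564 -> rounded down to a clean 560
```

The grid spacing and line width come straight out of the two constants in the expression: change 100 to alter the spacing, and lower 98 to make the lines thicker.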