Been a while... my webcam -> gridflow -> OSC -> SuperCollider patch is working really well in Linux. But now a problem -- I need to make this available to some students who are running Windows. [#camera] doesn't work in Windows (unless something changed since the last stable release).
So then I thought: the first time I tried to use gridflow, I couldn't figure out how to use [#camera], so I ended up doing [metro] --> [gemdead] --> [pix_video] --> [#from_pix]. Then I found that in Windows there's no Gem integration either ("couldn't create" [gemdead] and [#from_pix]).
... which pretty well hoses a major project for the semester.
So I'm evaluating alternatives. Main question for here is: Is there any way to get webcam input into gridflow under Windows? (I'm guessing no.)
If not, what would be the next best thing?
The gridflow objects I'm using are:
[#fold +] and [#redim] to sum the pixels
Is it feasible to replace these with Gem or other objects?
Would appreciate some ideas, soon if possible -- if I'm going to get this project off the ground, I need to do it quickly.
Patch attached. (Also uses mrpeach OSC objects, i.e., pd-extended.)
i don't understand what output you are expecting from the frame diff thing. i'm not a gridflow user, but if you give me more details we can try to work out a gem solution. by the way, you can write to mathieu (see on pd-list) for gridflow-related questions, i'm sure he'll get back to you in a short time
Last edited by guido (2012-03-01 19:46:18)
Thanks for the reply.
The outline of the patch is:
- Get the video image from the camera.
- Reduce the resolution by a factor of 4.
- (Optionally) blur.
- Convert to grayscale.
- [#motion_detection] is a frame-difference calculator, with some noise reduction inside. I don't know how the noise reduction works. If something is moving in one part of the image (especially edges with contrasting brightness), then the grayscale values of those pixels will be different between frames and that area will be brighter in the motion_detection result. But if an area is static, the pixel values between successive frames will not change much, making a darker result.
- Then I use [#slice] iteratively to get 25 sub-frames (a 5x5 division).
- For each slice, [#moment] gives the centroid x and y (average of the coordinates, weighted by the pixel value -- so if most of the white stuff is at the top left, x and y will tend toward the top left).
- I'm also summing the pixel values within each slice to estimate the amount of motion -- so I can tell the difference between, e.g. two areas of motion in opposite corners vs motion in the center.
Let me know if you need more detail.
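In case it helps to see the math spelled out: here's a rough NumPy sketch of the analysis steps above (frame difference with a crude noise gate, 5x5 slicing, and per-slice centroid plus pixel sum). This is just an illustration of the arithmetic, not GridFlow or Gem code; the function names and the threshold value are my own assumptions.

```python
import numpy as np

GRID = 5  # 5x5 division -> 25 slices

def motion_frame(prev, curr, threshold=10):
    """Absolute frame difference with a simple threshold as noise reduction."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
    diff[diff < threshold] = 0  # crude noise gate (guessing at what #motion_detection does)
    return diff

def analyze(diff):
    """For each of the GRID x GRID slices, return (cy, cx, total).

    cy, cx: centroid coordinates weighted by pixel value (like [#moment]);
    total: sum of the slice's pixel values, the raw "amount of motion".
    """
    h, w = diff.shape
    sh, sw = h // GRID, w // GRID
    results = []
    for row in range(GRID):
        for col in range(GRID):
            sl = diff[row*sh:(row+1)*sh, col*sw:(col+1)*sw]
            total = int(sl.sum())
            if total == 0:
                results.append((0.0, 0.0, 0))
                continue
            ys, xs = np.mgrid[0:sh, 0:sw]
            cy = float((ys * sl).sum()) / total  # pixel-value-weighted centroid
            cx = float((xs * sl).sum()) / total
            results.append((cy, cx, total))
    return results
```

Each of the 25 tuples would then go out as one OSC message per slice.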
I also posted on gridflow-dev, which was pretty responsive in the past but I haven't heard anything from there yet.
i get the overall structure of the patch but i'm still wondering what kind of output you are looking for. are you trying to find out which of the 25 sections is recording some action? the [#moment] object makes me think that you are doing something more complex and specific than that.
Last edited by guido (2012-03-04 15:11:17)
[#moment 1] is just a centroid calculator, nothing more than that. I'm attaching a screenshot of an example, where the only pixel that's lit up is at x=2, y=1. [#moment 1] returns a two-item grid: y, x.
I'm not doing any checking within pure data whether there is or is not any movement in a slice. I'm just getting centroid x and y from [#moment 1] and summing the pixel values for a raw "amplitude" (wrong term) of movement. I'm sending a separate OSC message to SuperCollider for each slice with those three values: 25 OSC messages per frame.
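For clarity, the same centroid calculation in NumPy (again, just the math, not GridFlow code; the helper name is mine). It reproduces the screenshot example: with only the pixel at x=2, y=1 lit, the result is y first, then x.

```python
import numpy as np

def moment1(grid):
    """Pixel-value-weighted centroid of a 2-D grid; returns (y, x) like [#moment 1]."""
    total = grid.sum()
    ys, xs = np.mgrid[0:grid.shape[0], 0:grid.shape[1]]
    return (float((ys * grid).sum() / total), float((xs * grid).sum() / total))

# Screenshot example: the only lit pixel is at x=2, y=1.
frame = np.zeros((4, 4))
frame[1, 2] = 255
print(moment1(frame))  # -> (1.0, 2.0): y first, then x
```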
Centroid, slicing the frame and summing the pixels -- I don't know how to do that in Gem.
Hm, summing, maybe [pix_mean_color]. And frame difference, [pix_movement2] (I only need grayscale).
Oh wait... is the centroid [pix_blob]?
Still no idea about the slicing.
Made some progress -- actually, fairly convinced I could do the analysis on the slices -- but I'm stuck on this point.
In gridflow, I could use #t and #var to pass the same grid multiple times -- so I could get one image from the camera and crop (slice) the same grid 25 times.
In gem (if I'm reading the help patches correctly), if I pass [pix_video] --> [pix_movement] --> [pix_crop], the cropping will be done only when that frame is being rendered -- no way to crop the same frame several times.
From the gridflow patch, I'm getting about eight frames per second (not great, but enough for the triggering work that I'm doing in supercollider). So, if I wanted to get 25 slices eight times per second, I would need to set the gemwin frame rate to 200 :-O and I'm guessing that would not be very successful.
So I guess my options are:
- Reduce the number of slices (and thereby the required gemwin frame rate). With nine slices, I could get away with 72 frames per second. This might be okay for this purpose, but I hoped to avoid that.
- Is there some third image framework in Pd that would let me do this?
- Beg the gridflow-dev list for an update where the camera works in Windows (which may be a lot to ask).
Any other options I haven't thought of?
I hope not to be a pest, but I've got some students waiting to be able to use this thing and I'm stuck...
Thanks bunches --
Hm, I might be getting just about there. Just wanted to run this patch by some folks who have more gem experience.
It *seems* to be producing reasonable x, y and size values from pix_blob. I haven't tried to do the slicing yet (the logic is basically there in my bigger patch, just out of time and energy for tonight). The size is quite small in magnitude, but it does go handsomely to 0 when there is not much motion.
But, I suppose there might be some mysterious thing that I haven't got quite right (being completely nooby wrt gem). So if someone wouldn't mind offering another pair of eyes, I would much appreciate.