Hello, I have updated the OS to the latest revision (UDOObuntu 2 RC1). What about video recording from the analog source? Is an update needed? Is /dev/video0 the analog signal input? Trying with gstreamer, the following error occurs:

GST_DEBUG=2,*imx*:9 gst-launch-1.0 imxv4l2videosrc device=/dev/video0 num-buffers=2 ! jpegenc ! filesink location=sample.jpeg

0:00:00.183500770 12020 0x12dec80 INFO imxv4l2videosrc ../src/v4l2src/v4l2src.c:398:gst_imx_v4l2src_get_caps:<imxv4l2videosrc0> get caps filter (NULL)
0:00:00.184031100 12020 0x12dec80 INFO imxv4l2videosrc ../src/v4l2src/v4l2src.c:409:gst_imx_v4l2src_get_caps:<imxv4l2videosrc0> get caps video/x-raw, format=(string)I420, width=(int)[ 16, 2147483647 ], height=(int)[ 16, 2147483647 ], interlace-mode=(string)progressive, framerate=(fraction)[ 0/1, 100/1 ], pixel-aspect-ratio=(fraction)[ 0/1, 100/1 ]
0:00:00.184634762 12020 0x12dec80 INFO imxv4l2videosrc ../src/v4l2src/v4l2src.c:398:gst_imx_v4l2src_get_caps:<imxv4l2videosrc0> get caps filter (NULL)
0:00:00.185042426 12020 0x12dec80 INFO imxv4l2videosrc ../src/v4l2src/v4l2src.c:409:gst_imx_v4l2src_get_caps:<imxv4l2videosrc0> get caps video/x-raw, format=(string)I420, width=(int)[ 16, 2147483647 ], height=(int)[ 16, 2147483647 ], interlace-mode=(string)progressive, framerate=(fraction)[ 0/1, 100/1 ], pixel-aspect-ratio=(fraction)[ 0/1, 100/1 ]
Setting pipeline to PAUSED ...
0:00:00.187223078 12020 0x12dec80 LOG imxv4l2videosrc ../src/v4l2src/v4l2src.c:208:gst_imx_v4l2src_start:<imxv4l2videosrc0> start
0:00:00.187611409 12020 0x12dec80 ERROR imxv4l2videosrc ../src/v4l2src/v4l2src.c:137:gst_imx_v4l2src_capture_setup:<imxv4l2videosrc0> VIDIOC_S_STD failed
0:00:00.187905074 12020 0x12dec80 ERROR imxv4l2videosrc ../src/v4l2src/v4l2src.c:212:gst_imx_v4l2src_start:<imxv4l2videosrc0> capture_setup failed
0:00:00.188147405 12020 0x12dec80 WARN basesrc gstbasesrc.c:3584:gst_base_src_activate_push:<imxv4l2videosrc0> Failed to start in push mode
0:00:00.188408404 12020 0x12dec80 WARN GST_PADS gstpad.c:994:gst_pad_set_active:<imxv4l2videosrc0:src> Failed to activate pad
ERROR: Pipeline doesn't want to pause.
Setting pipeline to NULL ...
Freeing pipeline ...

v4l2-ctl --all
Driver Info (not using libv4l2):
	Driver name   : mx6s-csi
	Card type     : i.MX6S_CSI
	Bus info      : platform:2214000.csi
	Driver version: 3.14.56
	Capabilities  : 0x04000001
		Video Capture
		Streaming
Video input : 0 (Camera: ok)
Video Standard = 0x0000b000
	NTSC-M/M-JP/M-KR
Format Video Capture:
	Width/Height  : 0/0
	Pixel Format  : ''
	Field         : Any
	Bytes per Line: 0
	Size Image    : 0
	Colorspace    : Unknown (00000000)
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 0, Height 0
	Default     : Left 0, Top 0, Width 0, Height 0
	Pixel Aspect: 0/0
Crop: Left 0, Top 0, Width 0, Height 0

Any hint is welcome. Thank you
Hi Flavio, could you post the exact command you used for gstreamer? I also tried to access the analog video input with the gstreamer-imx package; see my post: http://udoo.org/forum/threads/problems-using-gstreamer-for-analog-video-in.3375/ Have you already managed to get it running? Thank you
Hello, this is the CLI command I used:

GST_DEBUG=2,*imx*:9 gst-launch-1.0 imxv4l2videosrc device=/dev/video0 num-buffers=2 ! jpegenc ! filesink location=sample.jpeg

Sorry, but I have no updates on this issue.
Friends, I haven't answered before because I haven't had time. We are aware of these problems and my colleagues are working to solve them. As soon as they are solved, you'll be the first to know.
I have tested the camera output; however, I resorted to writing a custom application because the camera outputs a format which the existing gstreamer plugins can't convert. Given that there is no hardware support for MPEG/H.264 encoding on the i.MX6SX, I think encoding the video stream for recording/saving may be too CPU-intensive. In my opinion the most practical use for the camera output will be rendering to a display.
Could you tell me how exactly you tested it? I want to create a network stream, not save or record it; or is there no difference regarding CPU performance? By custom application, do you mean writing your own program, e.g. in C or C++? If so, could you give me any hints about libraries or anything else I need to do that myself? Any information you can give would help. Thank you very much!
I wrote some C code to do this via the v4l2 interface. I plan to put the code on GitHub, although it may take a week or so because it needs further work. I think you're going to struggle to stream the output: the camera output is YUV (32-bit), so each frame is around 1.3 MB, and trying to re-encode that to H.264 at 30 fps on a single A9 will be a feat in itself. In fact, initially my test code was doing a few memcpy's per frame and this brought the CPU to 90% just to render to screen.
Further to my last reply, another option to explore would be to use a USB camera that outputs H.264 and stream using RTSP.
Just to let you know, I'm using a Logitech C920, which supports H.264 without any problems on my Neo. It shows up as /dev/video1. I have successfully used streamer to capture stills with it in my project.
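Since the C920 can emit H.264 directly, a gst-launch pipeline along these lines should be able to forward it over RTP without re-encoding on the A9. This is an untested sketch: the device node, resolution, host address and port are all placeholders to adapt.

```shell
# assumes the C920 enumerates as /dev/video1; 192.168.1.10:5000 is the receiver
test -e /dev/video1 || { echo "no /dev/video1, skipping"; exit 0; }
gst-launch-1.0 -e v4l2src device=/dev/video1 \
  ! video/x-h264,width=1280,height=720,framerate=30/1 \
  ! h264parse ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.1.10 port=5000
```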
Thanks for this information. The i.MX 6SoloX is equipped with NEON, and on GitHub I found some H.264 source code (openh264) which is said to support NEON. So I think that with this acceleration it should be no problem, even on the single A9; don't you agree, jas-mx? I have already forked the openh264 code, but am still working on using it successfully. Nevertheless, my problem is that so far I have not managed to use the VADC at all, regardless of how...
Even with NEON support you may not get the required throughput. Regardless of the VADC issue, you can easily test by reading a file containing YUV (32-bit) images (720x480) into memory and feeding these into the h264 encode function. This should give you an idea of how many frames per second can be encoded.
Thanks for that hint. I will definitely try this. Could you nevertheless give me a few pointers on how to use the v4l2 interface in C?