A friend of mine and I have been struggling to figure out how to extend the custom white balance, which we can successfully set on the live view, to the saved JPG image. We are using the Canon SDK 2.1.34 and an EOS 600D camera, programming in C#.
Apparently this is the same problem/procedure involved in applying the Custom Picture modes to the saved image. The manual is cryptic to us. Does anyone have a good example of how to achieve this?
Thank you!
Federico
This duplicates my earlier attempts and question in this post.
As far as I know this is not correctly supported (nor documented) in the EDSDK, nor have I encountered public workarounds. It has also proven hard to find a good contact at Canon (even from within the company) who can help us all out. EOS Utility can do it internally, but it uses undocumented PTP calls (which could be recorded and reverse engineered).
Unfortunately, your best bet is to either:
shoot RAW and do custom white balancing in post (the as-shot WB is effectively random), or
approximate the white balance using Color Temperature and a custom temperature shift; these settings can be pushed into JPGs.
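For the RAW-plus-post route, one simple starting point is a gray-world white balance. This is a generic post-processing sketch in Python, not anything from the Canon SDK; a real raw converter would use the measured reference patch instead of the gray-world assumption:

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: scale each channel so its mean
    matches the overall mean of the image."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gains = means.mean() / means             # per-channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```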
I am trying to develop an AR application in unity using the new AR Foundation.
This application would need to use two features:
It needs to use a large number of tracking images
It needs to reliably identify which image (marker) is being tracked (only one image will be visible at a time)
What I need is to dynamically generate the fiducial markers, preferably with the tracking part the same for all of them and only a specific part carrying the ID of the marker. Preferably the AR code would be similar to the ARToolKit one from this image:
Do these markers work well with AR Foundation (an abstraction over ARCore and ARKit)?
Let's say I add 100 of these generated codes into the XR reference image library. Is it possible that AR Foundation image targets get "confused" and mix up tracked images? Could I, in theory, use QR codes as markers and simply encode the ID information into the QR code?
In a project I searched for a good way to implement a lot of different markers to identify a vast number of different real-world objects. At first I tried QR codes and added them to the image database in ARFoundation.
It worked, but markers sometimes got mixed up, and this happened with as few as 4 QR codes containing the words "left", "right", "up" and "down". The problem is that ARFoundation relies on ARCore, ARKit, etc., depending on the platform you build for.
Excerpt from the ARCore guide:
Avoid images that contain a large number of geometric features, or very few features (e.g. barcodes, QR codes, logos and other line art), as this will result in poor detection and tracking performance.
The next thing I tried was to combine OpenCV with ARFoundation and use ArUco markers for detection. The detection works much better and faster than the image recognition. This was done by accessing the camera image and using OpenCV's marker detection. In ARFoundation you can access the camera image with public bool TryAcquireLatestCpuImage(out XRCpuImage cpuImage).
The problem with this method:
This is a resource-intensive process that impacts performance...
On an iPad Pro 13" 2020, the performance of my application dropped from a constant 60 FPS to around 25 FPS. For me, this was too serious a performance drop.
A solution could be to create a collection of images with large variation and a perfect score, but I am unsure how images meeting all these constraints could be generated. (There is probably also a limit of 1000 images per reference database; see the ARCore guide.)
If you want to check whether these markers work well in ARCore, go to this link and download the arcoreimg tool.
The tool will give you a score that tells you whether an image is trackable. Though the site recommends a score of 75, I have tested it with scores as low as 15. Here is a quick demo if you are interested. The router image in the demo has a score of 15.
So, recently I managed to land myself a Kinect v2 project (hurray!) on my industrial placement, which is supposed to detect whether a person is wearing the correct PPE (personal protective equipment, to you and me).
Included in this scope is detecting if the person is wearing the correct:
Hazard Protection Suit
Boots
Gloves
Hat
A beard mesh! (Only if they have a beard)
I have only recently begun working with the Kinect v2 sensor, so you will have to forgive any ignorance on my part. I am new to this whole game but have worked through a fair few examples online. I guess I am just looking for advice/sources on how best to solve the problem.
Sources online seem scarce when it comes to detecting whether a person is WEARING something. There are a couple of parts to this project:
Split up human into components (hands, feet, head, body) while retaining colour. This part I imagine would be best done by the Kinect SDK.
Give a percentage likelihood that the person's hand part is wearing gloves, head part is wearing hat etc... I've seen some ideas for this including graphics packages such as OpenCV.
I'm looking for advice concerning both parts of this project. Feel free to ignore the waffling below, but I thought it best to post some of my own ideas first.
Idea (and worked example) 1 - Point clouds
I have done some preliminary projects involving basic detection of humans. In fact, grabbing the VS project from this source: http://laht.info/record-3d-video-with-kinect-v2-and-play-it-back-in-the-browser/ I have managed to create .ply files of human beings. The problem here would be trying to split the 3D .ply image up to identify the suit, hat, gloves, boots and (I guess) beard. I imagine this is a fairly complex task.
Of the 3 ideas, however, I'm coming back around to this one. Splitting up a 3D .ply image and trying to detect the PPE could turn out to be more accurate. I would need advice here.
Idea 2 - Blob Detection
Using this: http://channel9.msdn.com/coding4fun/kinect/Kinect--OpenCV--WPF--Blob-Tracking I pondered whether there would be a good way of splitting up a human into a colour picture of their "hand part", "head part", "body part", etc. with the Kinect v2 SDK. Maybe I could then use a graphics package like OpenCV to test whether the colours are similar, or whether the logo on the suit can be detected. Unfortunately, there is currently no logo on the gloves or boots, so this might not give a very reliable result.
Idea 3 - Change the PPE
This isn't ideal, but if I can't do it another way, I could perhaps insist that logos be put on the gloves and suit. In this worst-case scenario, I guess I would just be trying to detect some text in 3D space.
Summary
I'd be grateful for any advice on how to begin tackling this problem. Even initial thoughts might spark something :)
Peace!
Benjamin Biggs
Our website allows people to upload images. However, we don't allow watermarked images, yet many still get uploaded by users. Is there some software/code that can (at least in most cases) catch images that have watermarks such as logos or other images? I'm not sure if there is some sort of standard for this.
You can do it via image classification.
Basically, train a CNN (convolutional neural network) model by feeding it some images with a watermark and some without, and then use this model to judge the probability of a watermark in any new image.
You can apply transfer learning to an existing pre-trained model (as of today, Inception v3 is among the best out there), which can be retrained for your specific classification purpose.
For example, this link shows how to do it to identify whether an image is of a sunflower, a daisy or a rose.
https://www.tensorflow.org/tutorials/image_retraining
Here is a quick 5-minute tutorial on building a TensorFlow image classifier: https://youtu.be/QfNvhPx5Px8
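A minimal Keras sketch of that retraining idea. The answer above mentions Inception v3; MobileNetV2 is used here only as a lighter stand-in, and weights=None only to avoid the ImageNet download in this sketch (in practice you would load weights="imagenet" and keep the base frozen):

```python
import tensorflow as tf

def build_watermark_classifier(input_shape=(224, 224, 3)):
    """Pre-trained backbone plus a tiny binary head: P(image has a watermark)."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False  # transfer learning: only the new head is trained
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training then just means calling model.fit on two labelled folders of images (watermarked / clean).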
Detecting an arbitrary logo on an image would be quite complicated. You would need something similar to face recognition, and a lot of AI...
To make it reasonably efficient, you would need a library of logos to look for and knowledge of where they are applied on the images. If the logo is always in the same place, you could mask out the pixels where it would be and calculate how close they are to the pixels of the logo. If logos vary in size and position, it gets more complicated.
You can't reliably detect a watermark automatically. The best thing to do is to make it really easy for others to report images that have a watermark and, once an image is reported, put it in a holding state where it isn't displayed until it's verified whether it has a watermark.
With a certain kind of AI it would be possible, at least with some probability. More precisely, it IS possible provided that you CAN define what the watermark is, which is the greatest problem. Generic watermark detection is virtually impossible; consider a logo on a billboard in a photo, etc.
I am making an object tracking application. I have used EmguCV 2.1.0.0 to load a video file into a PictureBox. I have also taken the video stream from a web camera. Now I want to draw an unfilled square on the video stream with the mouse and then track the object enclosed by the square as the video continues to stream.
This is what people have suggested so far:
(1) .NET video overlay drawing (DirectX) - but this is for C++ users; the suggester said there are .NET wrappers, but I had a hard time finding any.
(2) The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, it does not involve the mouse.
(3) GDI+ and mouse handling - an area where I do not have a clue.
And for tracking the object in the square, I would appreciate it if someone could give me links to some research papers to read.
Any help as to using the mouse to draw on a video is greatly appreciated.
Thank you for taking the time to read this.
Many Thanks
It sounds like you want to do image detection and/or tracking.
The EmguCV ( http://www.emgu.com/wiki/index.php/Main_Page ) library provides a good foundation for this sort of thing in .Net.
e.g. http://www.emgu.com/wiki/index.php/Tutorial#Examples
It's a pretty meaty subject, with quite a few years and several branches of research behind it, so I'm not sure anyone can give the definitive guide to such things; but reading up on neural networks and related topics would give you a pretty good grounding in the way EmguCV and related libraries manage it.
It should be noted that systems such as EmguCV are designed to recognise predefined items within a scene (such as a licence plate number) rather than an arbitrary feature within a scene.
For arbitrary tracking of a given feature, a search for research papers on edge detection and the like (in combination with a library such as EmguCV) is probably a good start.
(You may also want to sneak a peek at an existing application such as http://www.pfhoe.com/ to see if it fits your needs.)
I am working on the development of a Massively Multiplayer Online Role Playing Game (MMORPG) in .NET using C# and Silverlight. One of the features that has been requested for this game is to allow players to upload their own avatars.
Rather than displaying the uploaded images in their raw form, we want to convert them to a cartoon form; in other words, to cartoonize the image.
Several sites which can accomplish such a task are listed at http://www.hongkiat.com/blog/11-sites-to-create-cartoon-characters-of-yourself/
I realize that these sites are applying an image filter to create the cartoon image. Frankly, I have no reasonable idea what these cartoon image filter algorithms might look like or if there is anything already available in C# or .NET that I could use. If there are no libraries available, I am curious how difficult it would be to roll my own.
This is a minor game feature so I am not interested in devoting a week or more of coding time to implement this. However, if I can code up what I need within a day, then it is probably viable.
At this point, I am primarily looking for guidance as to
what is possible
what libraries are already available (preferably as open source)
where I may find additional information
any other advice or guidance you may be able to provide
Thank you in advance!
Apparently you apply a Gaussian blur filter to the image and then sharpen it. Perhaps the AForge libraries would help you out.
I've used code from the Image Processing Lab on CodeProject before with success. (Update: here's the library it uses.)
Christian Graus has also written a whole series on GDI+ image processing, which I found useful (it includes the filtering effects listed above).