Hi all,
I'm developing software to control light shows (through the DMX protocol). I use C# and WPF for my main application (.NET 4.0).
To help people preview their show, I would like to make a live 3D visualizer...
At first I thought I could use WPF 3D for the visualizer, but I need to work with light...
My main application should send properties (beam angle, orientation (X, Y), position (X, Y), brush (color, shape, effect)) to the 3D visualizer.
But I would also like to be able to move lights (their position in the scene) with the mouse during execution and get the values back in return...
So...
Is XNA the easiest way to do that?
Can you help me with the following:
Generating light (orientation, a bitmap-like filter in front of the light)
Dynamically moving an object with the mouse and getting its position in return
Dynamically adding or removing fixtures
All of your advice, samples, and examples are very welcome... I don't expect a perfect result on the first try, but I need to understand the main concepts.
Thank you!!
XNA does not contain any functionality for managing a "scene" - you will have to implement that yourself. For example, you might make a Light class containing the information about your light (position, orientation, etc.), and then keep a List<Light> of them, which you update and render yourself.
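Here is a minimal sketch of what that could look like, assuming XNA 4.0 types; the Light fields are illustrative, not a fixed design:

    // Minimal "scene" for the lights, using XNA 4.0 types.
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    public class Light
    {
        public Vector3 Position;      // position on the stage
        public Vector2 Orientation;   // pan/tilt angles in radians
        public float BeamAngle;       // beam spread in radians
        public Color Color;           // colour sent from the DMX side
    }

    public class LightScene
    {
        public List<Light> Lights = new List<Light>();

        public void Update(GameTime gameTime)
        {
            // apply incoming DMX values / user input to each Light here
        }

        public void Draw()
        {
            foreach (Light light in Lights)
            {
                // render the fixture model and its beam here
            }
        }
    }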
I will now assume that you have a 3D model of a "Light" (as in: the metal box containing the lightbulb) and also a 3D model of a stage. And that you can figure out how to render them - there are plenty of tutorials online for simple model rendering in XNA. Here is a starting point.
So your 3rd requirement ("Dynamically adding or removing fixtures") should be fairly simple once you can render things. Just add and remove Lights from your List based on user input. See the Input namespace.
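For example, inside your Update() method (previousKeyboard, scene, and selectedLight are assumed fields, and the key bindings are arbitrary):

    // Add/remove fixtures based on key presses, with edge detection so a
    // held key only fires once.
    KeyboardState keyboard = Keyboard.GetState();

    if (keyboard.IsKeyDown(Keys.Insert) && previousKeyboard.IsKeyUp(Keys.Insert))
        scene.Lights.Add(new Light { Position = new Vector3(0, 5, 0) });

    if (keyboard.IsKeyDown(Keys.Delete) && previousKeyboard.IsKeyUp(Keys.Delete)
        && selectedLight != null)
        scene.Lights.Remove(selectedLight);

    previousKeyboard = keyboard; // keep for edge detection next frame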
And your 2nd requirement ("Dynamically moving an object with the mouse and getting its position in return") should also be simple. If you want your user to move lights by clicking and dragging, just keep track of the mouse position between frames and apply the difference as an adjustment to the clicked Light's position (or rotation).
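A sketch of the drag itself (draggedLight and previousMouse are assumed fields; the pixels-to-world factor of 0.01 is arbitrary and needs tuning):

    // Drag the selected Light across the stage plane using the mouse delta.
    MouseState mouse = Mouse.GetState();

    if (mouse.LeftButton == ButtonState.Pressed && draggedLight != null)
    {
        float dx = (mouse.X - previousMouse.X) * 0.01f;
        float dy = (mouse.Y - previousMouse.Y) * 0.01f;
        draggedLight.Position += new Vector3(dx, 0, dy); // move in the ground plane
    }

    previousMouse = mouse;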
To figure out which Light the user clicks in the first place, a good starting point is the Picking Sample.
I am assuming here that the user will click the Light (metal box) itself to move/rotate it. If you would rather have the user click and drag the endpoint of the light (the spot it projects) - that is more difficult. One idea that comes to mind: Intersect a ray from your Light with the stage to find the centre of the projected spot. At that point draw a dummy "handle" object (like a sphere) that the user can click and drag around. When the user finishes dragging, figure out the new orientation for the Light to make that the new centre.
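A sketch of that idea, assuming the stage is the plane y = 0 and that you have some helper that turns the light's pan/tilt into a direction vector (ComputeBeamDirection here is hypothetical):

    // Find the centre of the projected spot by intersecting the beam ray
    // with the stage plane.
    Vector3 beamDirection = ComputeBeamDirection(light.Orientation);
    Ray beam = new Ray(light.Position, beamDirection);
    Plane stage = new Plane(Vector3.Up, 0); // the plane y = 0

    float? distance = beam.Intersects(stage);
    if (distance.HasValue)
    {
        Vector3 spotCentre = beam.Position + beam.Direction * distance.Value;
        // draw the draggable "handle" (e.g. a small sphere) at spotCentre
    }

    // When the user finishes dragging the handle to newSpot, re-aim the light:
    Vector3 newDirection = Vector3.Normalize(newSpot - light.Position);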
Finally, your 1st requirement ("Generating light (orientation, a bitmap-like filter in front of the light)") is the tricky one. My understanding is that you want a way to draw the endpoint of the beam of light on your stage model. If so, what you are looking for is called Projective Texture Mapping. Presumably you will have a circular texture for basic lights, and perhaps other textures for gobos.
The quick-and-dirty way to do this would be to draw your stage model once per light, with additive blending (so that each light adds to the others and black has no effect), with the colour set to whatever you want the light colour to be, and with a black-and-white texture (a white circle on a black background) sampled with TextureAddressMode.Clamp, using a shader that applies projective texture mapping with the light as the projection point.
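A sketch of the C# side of one such light pass; the effect and its parameter names ("LightViewProjection" and so on) are placeholders for your own projective-texturing shader:

    // Render states for one additive light pass (create the SamplerState
    // once, not every frame).
    GraphicsDevice.BlendState = BlendState.Additive;
    GraphicsDevice.SamplerStates[0] = new SamplerState
    {
        AddressU = TextureAddressMode.Clamp,
        AddressV = TextureAddressMode.Clamp,
        Filter = TextureFilter.Linear,
    };

    // Build a view/projection pair that "looks out" from the light fixture;
    // the beam angle becomes the field of view.
    Matrix lightView = Matrix.CreateLookAt(light.Position,
                                           light.Position + beamDirection,
                                           Vector3.Up);
    Matrix lightProjection =
        Matrix.CreatePerspectiveFieldOfView(light.BeamAngle, 1f, 0.1f, 100f);

    projectiveEffect.Parameters["LightViewProjection"].SetValue(lightView * lightProjection);
    projectiveEffect.Parameters["LightColor"].SetValue(light.Color.ToVector4());
    projectiveEffect.Parameters["SpotTexture"].SetValue(spotTexture);
    // ...then draw the stage model once with this effect...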
Related
In my 2D Unity project, I have a Canvas with an Image that I want to use as a background.
I have two GameObjects in front of this background. But no matter how much fiddling I do with Pos Z, sorting layers, or hierarchy sorting, the image is always in front of the objects.
The gif above shows in 3D mode that even though the image is clearly behind these objects, it will always appear over them if they overlap.
Hierarchy:
Main Camera (Inspector: https://i.imgur.com/Q5a52cf.png)
BackgroundCanvas (Inspector: https://i.imgur.com/m9Pxr6B.png)
    BackgroundImage (Inspector: https://i.imgur.com/jTx7pEW.png)
Object1 (Inspector: https://i.imgur.com/YcClEhk.png)
Object2
Any advice to rescue me from this madness is much appreciated.
Set the Sprite Renderer's transform Z value to 0 instead of 100.
If that does not solve it, please also specify the camera properties, so I can try to recreate the exact setup.
Try clicking Layers -> Edit Layers. Inside Sorting Layers you can change the order by dragging the layers; everything higher up appears behind in the camera.
You could create a layer called Object and assign it to the game objects. Then create an "object camera" with:
Culling Mask -> the Object layer
Depth bigger than your current main camera
Projection -> Orthographic
Clear Flags -> Solid Color
Finally, set the canvas Render Mode -> Screen Space - Camera and assign the Render Camera to be the object camera, as in the sketch below.
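Here is a rough sketch of the same setup done from code, assuming a layer named "Object" already exists and that mainCamera and canvas are assigned in the Inspector:

    using UnityEngine;

    public class ObjectCameraSetup : MonoBehaviour
    {
        public Camera mainCamera;
        public Canvas canvas;

        void Start()
        {
            Camera objectCamera = new GameObject("ObjectCamera").AddComponent<Camera>();
            objectCamera.cullingMask = LayerMask.GetMask("Object"); // only the Object layer
            objectCamera.depth = mainCamera.depth + 1;              // render after the main camera
            objectCamera.orthographic = true;
            objectCamera.clearFlags = CameraClearFlags.SolidColor;

            canvas.renderMode = RenderMode.ScreenSpaceCamera;
            canvas.worldCamera = objectCamera;                      // the "Render Camera" slot
        }
    }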
In the Inspector tab of the object or background, go to Sprite Renderer -> Additional Settings -> Sorting Layer and change it to a different layer.
Had this same issue and was able to fix it with these steps:
In the canvas settings, change Screen Space - Overlay to Screen Space - Camera
Set the camera variable to the one you are using for your scene.
I figured out a workaround. I basically created a VisualElement inside the UI Builder and set a render texture as its background. Then I added an extra camera to my project to view all the sprites that needed to be on top. That camera feeds the render texture, so everything that camera sees is forced to be on top of the UI Document as the background of that VisualElement. If you want control over the whole screen, just set the VisualElement position to absolute and max out its dimensions. If your game doesn't have a fixed aspect ratio it might cause some stretching, but other than that I can't really tell the difference. Sorting layers for UI Documents are broken and Unity needs to work on that. This is the best option I've found. Hope this helps.
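A rough sketch of that workaround (the spriteCamera and uiDocument fields and the "background" element name are assumptions for illustration):

    using UnityEngine;
    using UnityEngine.UIElements;

    public class SpritesOverUI : MonoBehaviour
    {
        public Camera spriteCamera;   // extra camera that sees only the sprites
        public UIDocument uiDocument;

        void Start()
        {
            var rt = new RenderTexture(Screen.width, Screen.height, 24);
            spriteCamera.targetTexture = rt;

            VisualElement background = uiDocument.rootVisualElement.Q("background");
            background.style.backgroundImage =
                new StyleBackground(Background.FromRenderTexture(rt));

            // Cover the whole screen.
            background.style.position = Position.Absolute;
            background.style.top = 0;
            background.style.left = 0;
            background.style.right = 0;
            background.style.bottom = 0;
        }
    }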
I had the same problem and I fixed it by attaching the camera to the canvas (which is screen space) and finally changing the sorting layer of my object to -1.
How can I create a menu in my application? I use a canvas, but the Gear VR camera doesn't see it.
Is there a way to use buttons in a Gear VR application?
3D text appears, but not canvas text.
Never use screen space UI with VR; switch to world space and either place the canvas somewhere in the scene (as a "real" object) or parent it to the camera so it is always in the center of the viewport.
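A minimal sketch of that setup (the 0.001 scale and the 2 m distance are typical starting values, not fixed numbers):

    using UnityEngine;

    public class WorldSpaceMenu : MonoBehaviour
    {
        void Start()
        {
            Canvas canvas = GetComponent<Canvas>();
            canvas.renderMode = RenderMode.WorldSpace;
            canvas.worldCamera = Camera.main;

            // Parent to the camera so the menu stays centred in the viewport.
            transform.SetParent(Camera.main.transform, false);
            transform.localPosition = new Vector3(0f, 0f, 2f); // ~2 m in front of the eyes
            transform.localRotation = Quaternion.identity;
            transform.localScale = Vector3.one * 0.001f;       // shrink canvas pixels to metres
        }
    }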
ChanibaL is right, the canvas cannot be used in screen space.
The last time I designed a menu in VR, I added a sphere around the camera with a solid black color and put the UI buttons on the inside of the sphere as a menu.
I think this may help you.
For more about UI design in VR, I advise reading this post: UI for VR
Hi, I am new to DirectX with C#. I have a problem in one project: I draw two cubes one behind the other (i.e. same X and Y location, different Z), but when I view the front cube it is transparent and the back cube is visible through it. I checked the transparency: no transparency level has been set, and CullMode is null. Can anyone suggest what the problem is?
I think the pixels of the back cube overlap those of the front cube; how do I overcome this?
Here are the screenshots:
Front facing: http://postimg.org/image/6irstpv75/
Top view: http://postimg.org/image/o7ktw54h3/
Welcome :)
Please consider adding tags to your post (programming language, "DirectX", etc.). Without knowing which language you use (edit: C#... you should put it in the tags. So, which framework? SharpDX? SlimDX? :)) I cannot be more specific.
It looks like you are not using a depth buffer: you draw your distant cube first and the closer cube after it, so it overwrites the existing pixels in the back buffer.
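Assuming the legacy Managed DirectX API (Microsoft.DirectX.Direct3D), enabling the depth buffer looks roughly like this; if you use SharpDX or SlimDX the names differ, but the idea is the same:

    // Request a depth buffer when creating the device, enable the Z test,
    // and clear the Z-buffer together with the colour buffer every frame.
    PresentParameters presentParams = new PresentParameters();
    presentParams.Windowed = true;
    presentParams.SwapEffect = SwapEffect.Discard;
    presentParams.EnableAutoDepthStencil = true;            // ask for a depth buffer
    presentParams.AutoDepthStencilFormat = DepthFormat.D16;

    Device device = new Device(0, DeviceType.Hardware, form, // form = your render window
                               CreateFlags.SoftwareVertexProcessing, presentParams);
    device.RenderState.ZBufferEnable = true;                 // turn the depth test on

    // Each frame:
    device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);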
I am making a WPF program with the possibility to modify data graphically in 3D. To give the user the option to select multiple graphical objects at the same time, I want to implement a selection rectangle (just like the one in Windows Explorer). A common feature in programs like this one is to have two different modes for the selection rectangle, and to let the user choose which one should be used:
Any object that is partially or completely inside the rectangle is selected.
Only objects that are completely inside the rectangle are selected.
The 2nd method is straightforward: use the bounding box of each object and check whether it is inside the rectangle. The 1st one, on the other hand, seems to be quite some work. All my graphical objects are complicated 3D figures and can be rotated by the user in any way. At the moment I cannot find any other way than checking whether any of the triangles in the mesh of any of the objects cross my 2D rectangle, and that can be quite time consuming.
I have little experience with WPF 3D, but I have done this before in OpenGL. There I could tell OpenGL to draw a specific area of the screen and then collect a list of the objects that were visible in that area. All I needed to get the functionality I wanted was about 5 lines of code.
I guess my question is this:
Is there a way to do this with WPF 3D, similar to the OpenGL approach?
If not, is there any other smart way to find all objects (Visual3D) in a viewport that is partially behind a 2D rectangle?
I refuse to believe I am the only one with this kind of problem, so I hope a clever mind can point me in the right direction.
Regards,
Sverre
Thank you for your answer!
The 2D rectangle is just in front of the camera, extending infinitely forward. I want to get any object that is partially or completely inside that frustum.
The camera we are using is an orthographic or perspective projection camera (System.Windows.Media.Media3D.ProjectionCamera). The reason we are not using the matrix camera is that we are using a 3rd party tool that does not support the matrix camera. But I am sure there is a way to get the matrix from a projection camera as well, so that is hopefully not the problem.
In theory your solution sounds like just what we need, but I am not sure how to proceed. Do you have any links to sample-code, or can you give some more hints on how to actually implement this?
Btw: since we are working with WPF, we do not have direct access to DirectX; at least that's what we have concluded after some research. You mention use of the Z-buffer, which we haven't been able to access through WPF. If you know a way to access the Z-buffer, it's greatly appreciated! This is off-topic, but we have struggled to disable the Z-buffer for some time, and have given up...
Best regards,
Sverre
Is your intersection region a 2D rectangle, or a frustum based at a 2D rectangle and extending infinitely forward (or perhaps to some clipping limit)? If it can be construed as a viewing frustum, then you can leverage the existing capabilities of the graphics system to render the scene using a camera View and Projection that correspond to your originating rectangle, with all lighting and shading disabled and colors chosen specifically to 'tag' the different objects in your scene. This means you can use the graphics hardware to perform the clipping/projection as a 'rendering' operation, then simply enumerate the pixel values as 'tags' to determine the objects present in the rectangular view.
If you need to restrict selection to an actual 2D slice (or a very shallow frustum), you can use the Z-buffer (if you can get access to it) to exclude tagged pixels that are outside the Z range of your desired selection frustum.
The nice thing about this approach is that you probably already have the camera View matrix (it's the same one used for your main window) and only need to change the Projection matrix to cover the sub-region of the viewing window.
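A rough WPF sketch of the tag-colour read-back (this assumes you have already swapped each model's material for a unique unlit EmissiveMaterial, removed the lights, and built a tagColorToObject dictionary while assigning the colours):

    // Render the Viewport3D offscreen, then scan the pixels inside the
    // selection rectangle and map their colours back to objects.
    var bitmap = new RenderTargetBitmap((int)viewport.ActualWidth,
                                        (int)viewport.ActualHeight,
                                        96, 96, PixelFormats.Pbgra32);
    bitmap.Render(viewport);

    int stride = (int)viewport.ActualWidth * 4;
    var pixels = new byte[stride * (int)viewport.ActualHeight];
    bitmap.CopyPixels(pixels, stride, 0);

    var hits = new HashSet<Visual3D>();
    for (int y = (int)rect.Top; y < (int)rect.Bottom; y++)
    {
        for (int x = (int)rect.Left; x < (int)rect.Right; x++)
        {
            int i = y * stride + x * 4; // BGRA layout
            Color tag = Color.FromRgb(pixels[i + 2], pixels[i + 1], pixels[i]);
            if (tagColorToObject.TryGetValue(tag, out Visual3D hit))
                hits.Add(hit);
        }
    }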
A 'smart' way would be to transform the rectangle into a box using the camera's matrix, and then do an intersection test between all the objects and the box.
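For the orthographic camera mentioned above, the math is simple enough to sketch by hand (WPF does not expose the view matrix directly, so we build it from the camera; a perspective camera would need a proper frustum test instead):

    // Look-at view matrix in WPF's row-vector convention.
    static Matrix3D ViewMatrix(ProjectionCamera cam)
    {
        Vector3D zAxis = -cam.LookDirection; zAxis.Normalize();
        Vector3D xAxis = Vector3D.CrossProduct(cam.UpDirection, zAxis); xAxis.Normalize();
        Vector3D yAxis = Vector3D.CrossProduct(zAxis, xAxis);
        Vector3D pos = (Vector3D)cam.Position;
        return new Matrix3D(xAxis.X, yAxis.X, zAxis.X, 0,
                            xAxis.Y, yAxis.Y, zAxis.Y, 0,
                            xAxis.Z, yAxis.Z, zAxis.Z, 0,
                            -Vector3D.DotProduct(xAxis, pos),
                            -Vector3D.DotProduct(yAxis, pos),
                            -Vector3D.DotProduct(zAxis, pos), 1);
    }

    // "Partially inside" test for one object: map the pixel rectangle to
    // camera-space extents, then overlap-test the transformed bounding box.
    // Conservative: it uses the AABB of the transformed box, so it can
    // over-select slightly.
    static bool PartiallyInside(Rect3D b, Matrix3D view, OrthographicCamera cam,
                                Rect rect, double vpWidth, double vpHeight)
    {
        double camW = cam.Width, camH = camW * vpHeight / vpWidth;
        double left   = (rect.Left  / vpWidth  - 0.5) * camW;
        double right  = (rect.Right / vpWidth  - 0.5) * camW;
        double top    = (0.5 - rect.Top    / vpHeight) * camH;
        double bottom = (0.5 - rect.Bottom / vpHeight) * camH;

        double minX = double.MaxValue, maxX = double.MinValue;
        double minY = double.MaxValue, maxY = double.MinValue;
        for (int i = 0; i < 8; i++) // the 8 corners of the bounding box
        {
            var corner = new Point3D(b.X + ((i & 1) != 0 ? b.SizeX : 0),
                                     b.Y + ((i & 2) != 0 ? b.SizeY : 0),
                                     b.Z + ((i & 4) != 0 ? b.SizeZ : 0));
            Point3D p = view.Transform(corner); // into camera space
            minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
            minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);
        }
        return maxX >= left && minX <= right && maxY >= bottom && minY <= top;
    }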
I have a 3D model of a house where the roof is invisible so that the rooms can be seen (like here).
But (for now) I have no textures and each surface has the same color, e.g.:
var myMaterial = new DiffuseMaterial(new SolidColorBrush(myColor));
If I view it in a WPF Viewport3D, I want to be able to differentiate between the surfaces, e.g. to see where the floor ends and the wall starts.
This should be possible by lighting the object. I have already tried:
An ambient light doesn't work, because all surfaces then look equally colored:
myViewport3D.Children.Add(new ModelVisual3D { Content = new AmbientLight(Colors.White) });
And if I use a directional light and stick its position to the moving camera, some surface normals are sometimes nearly perpendicular to the camera/light and so appear nearly black, which looks even more unnatural.
So what is a good way to distinguish the surfaces of a single-colored 3D object in a WPF Viewport3D?
Edited after user "jdv" wrote his comment
Personally, I find that this is best accomplished by a combination of two lights:
A dim (maybe 30% lit) ambient light. This always shows all surfaces.
A directional light, at about 80% white, which follows the camera but is offset by 30 degrees or so. I find a light "over the camera's left shoulder" tends to be what people often expect.
Also, if your surface normals are not always going to be correct, you can use a third light - another directional light pointing the opposite direction of the first. This will light the back faces of the surfaces if you've got inappropriate normals.
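A sketch of that rig in WPF, using the suggested 30% / 80% levels (re-aim the directional light whenever the camera moves):

    // Two-light rig: dim ambient plus a directional "key" light.
    var lights = new Model3DGroup();
    lights.Children.Add(new AmbientLight(Color.FromRgb(77, 77, 77)));   // ~30% white

    var keyLight = new DirectionalLight(Color.FromRgb(204, 204, 204),   // ~80% white
                                        new Vector3D(0, 0, -1));        // placeholder
    lights.Children.Add(keyLight);
    myViewport3D.Children.Add(new ModelVisual3D { Content = lights });

    // "Over the camera's left shoulder": whenever the camera moves, rotate
    // its look direction ~30 degrees about its up axis and use that.
    var tilt = new RotateTransform3D(new AxisAngleRotation3D(camera.UpDirection, 30));
    keyLight.Direction = tilt.Transform(camera.LookDirection);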
Since you can use 2 light sources, I would try using a dim light to act as an ambient background light and a somewhat stronger directional light to give contrast to the surfaces.
I am not a 3d expert, but would think of it this way:
In a dark room (no ambient light) with a flashlight (the directional light), you will see dramatic differences based on the angle of the surface to your flashlight. Add some ambient lighting and the harshness of those differences decreases as the ambient light source gets stronger, until at some point it overpowers the flashlight and everything appears evenly lit.
Good luck, hope you are able to achieve the effect you are after.