I'm working on a simple paddle game project at the moment. In the beginning everything looked good, but when I completed my level design and published the game, I noticed the GameObjects in my scene moving very slowly. I think I'm running past the limits of Unity3D's physics. If I could do the maths myself instead of using Unity3D's colliders, I could finish my first project. (I tried to use the Separating Axis Theorem, but I couldn't handle the x and y coordinates in Unity3D.)
I need your help. Thanks a lot for your time. (And if I can handle this problem, I will share my project on the internet for beginners like me.)
In my project I achieved this simulation using a BoxCollider, but because of Unity3D's physics limit I don't want to use colliders in my project.
You can't avoid collision detection entirely, and I'd recommend using Unity's colliders for it, since they're really not bad. The first thing I notice is that in your first simulation you have four box colliders where you should only need one. Use the OnCollisionEnter event (on the spheres) to reflect them off your box.
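A minimal sketch of that idea, assuming each sphere has a Rigidbody; the class and field names below are mine, not from your project:

    using UnityEngine;

    // Attach to each ball. On impact, reflects the ball's velocity around the
    // contact normal so it bounces off paddles and walls at constant speed.
    [RequireComponent(typeof(Rigidbody))]
    public class BallBounce : MonoBehaviour
    {
        private Rigidbody rb;
        private Vector3 lastVelocity;

        void Awake()
        {
            rb = GetComponent<Rigidbody>();
        }

        void FixedUpdate()
        {
            // Cache the velocity: by the time OnCollisionEnter runs, the physics
            // solver may already have altered it.
            lastVelocity = rb.velocity;
        }

        void OnCollisionEnter(Collision collision)
        {
            Vector3 normal = collision.contacts[0].normal;
            rb.velocity = Vector3.Reflect(lastVelocity, normal).normalized
                          * lastVelocity.magnitude;
        }
    }

If your paddle game is 2D, the same pattern works with Rigidbody2D, Collision2D and Vector2.Reflect.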
I'm wondering how to make shots in my game more realistic. There are two main ways to implement shooting. First: spawn a small projectile and let it detect collisions. Second: use a raycast. But in the real world (and I noticed this in PUBG) bullets fly really fast but not instantly, which is why you have to aim ahead of a distant target to hit it: if you aim right where it is, it will have moved by the time the bullet arrives and you'll miss.
I'm curious whether any of you have a nice solution to this problem. I'd also like to find a way to avoid raycasting every frame for things like checking whether a shot will actually hit a wall. And if you have good ideas for implementing spread and recoil for different types of weapons, I'd be happy to read them.
The blue points are the bullet's position on each frame. Every frame, cast a new ray from the previous position to the current one to check whether it crosses any character, like the red line in this picture.
The bullet's path itself can be driven by Unity's built-in physics system.
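A minimal sketch of that approach, with assumed field names and values; if you want a ballistic arc, a Rigidbody with gravity would replace the straight-line movement, but the per-frame raycast stays the same:

    using UnityEngine;

    // Moves a bullet forward each physics step and raycasts from the previous
    // position to the new one, so a fast bullet cannot tunnel through targets.
    public class Bullet : MonoBehaviour
    {
        public float speed = 200f;     // metres per second (made-up value)
        public LayerMask hitMask;      // layers the bullet can hit

        private Vector3 previousPosition;

        void Start()
        {
            previousPosition = transform.position;
        }

        void FixedUpdate()
        {
            transform.position += transform.forward * speed * Time.fixedDeltaTime;

            // The "red line": check the segment travelled this step.
            Vector3 delta = transform.position - previousPosition;
            RaycastHit hit;
            if (Physics.Raycast(previousPosition, delta.normalized, out hit,
                                delta.magnitude, hitMask))
            {
                Debug.Log("Hit " + hit.collider.name);
                transform.position = hit.point;   // stop at the surface
                Destroy(gameObject);
            }

            previousPosition = transform.position;
        }
    }

You only pay for a raycast while a bullet is actually in flight, rather than one per weapon per frame.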
In all the scripts I've used so far, changing the walk speed or adding an animation clip in the script didn't affect the character at all.
To change the character's speed I need to change it in the Third Person Character (Script) > Move Speed Multiplier.
And to change or add an animation I need to go to the Animator window, add a new state, use the HumanoidWalk clip in that state, and then either set the state as default or play it from a script with Play, like Play("Walk").
So what are all the properties in the scripts for? Why do the speed, animation and other settings never affect the character? (I'm not talking about the Nav Mesh Agent or the character's Transform.)
For example, I have a script that accepts a Walk anim and I assign HumanoidWalk to it, but then the character doesn't walk at all. Only if I create the state in the Animator window does it walk.
It's not just one specific script; it happens with others too.
I see in many places that users use Animation or _animation with Play("Walk"), but to make my player move with animation I need to use Animator or _animator.
So what is the difference in Unity scripts between Animation and Animator? Which should I use to make the character (in this case the ThirdPersonController) walk with animation, and not just move?
For example, when using waypoints I want the enemies to start walking/patrolling automatically when the game runs, so I create a new state in the Animator window with HumanoidWalk and then, in the script for that enemy, I call Play("Walk").
Complexity and backwards compatibility.
Basically, when Unity was first created as a product it was a pretty basic game engine, and a lot of systems that were necessary for game development were not very advanced. Then a need arose for something more capable, and in many cases Unity decided to create a completely new system from scratch and keep the old one as well.
Now we have legacy and new systems for GUI, for animation, for input handling, for particles and probably for something else I'm forgetting right now. However, it doesn't mean that old systems are completely useless: quite often, you want to use a simple and straightforward system without all the bells and whistles.
The new animation system lets you create great characters, but it also takes a lot of time to learn and set up. If I had a simple animated mesh that just needs to play the same animation on loop, I would use the old system; if I had a complicated character with several layers of behaviours and animations designed to blend with one another, I would use the new Animator.
By the way, the same holds for UI: while the old system is pretty bad for building player-facing UI, it's still widely used for quick prototypes and all kinds of debug menus.
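Coming back to animation: to make the difference concrete, here is a small sketch (class name mine) of the two Play calls. Scripts that call Animation.Play need the legacy Animation component with the clip assigned to it, while Animator.Play needs an Animator Controller containing a state with that name, which is why only the Animator-window setup made the ThirdPersonController walk:

    using UnityEngine;

    public class PlayWalkExample : MonoBehaviour
    {
        void Start()
        {
            // Legacy system: needs an Animation component with a "Walk" clip in its list.
            Animation legacyAnim = GetComponent<Animation>();
            if (legacyAnim != null)
                legacyAnim.Play("Walk");

            // New system: needs an Animator whose controller contains a "Walk" state.
            Animator animator = GetComponent<Animator>();
            if (animator != null)
                animator.Play("Walk");
        }
    }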
I'm new to Unity and I'm making a car racing game. I'm stuck at a certain point and have been looking for a solution to my problem without success.
My problem is:
When I run my game on my phone it stutters badly whenever there are several buildings in front of the car's camera, one building behind another. The reason is that there are so many vertices and edges at that moment that the camera can't render all of that geometry at once.
How do I preload the 2nd scene while the 1st scene is loading?
I am using Unity free version.
In graphics programming there is a common technique of simply not drawing objects that aren't in the field of view. I'm sure Unity can handle this. Check this link: Unity description on this topic
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler impostors are substituted for the complex versions (see the sketch after these tips for how this maps onto Unity).
Reduce the number of polygons by other means. A good example is if you've got a window ledge sticking out of the side of a building, don't try and make it an extension of the body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection impostor. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming that buildings aren't able to be moved (i.e. by smashing through them in a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible collision shape you can: a box, sphere or capsule, or a combination of those.
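If you want to see the LOD suggestion above in Unity terms, here is a minimal sketch using Unity's LOD Group component; the field names and transition percentages are assumptions, and in practice you would normally configure this on the component in the Inspector rather than in code:

    using UnityEngine;

    // Builds a two-level LOD group for a building: the detailed mesh is drawn
    // up close, a low-poly impostor further away, and nothing beyond that.
    public class BuildingLodSetup : MonoBehaviour
    {
        public Renderer detailedRenderer;   // full building mesh
        public Renderer simpleRenderer;     // low-poly impostor mesh

        void Start()
        {
            LODGroup group = gameObject.AddComponent<LODGroup>();

            LOD[] lods = new LOD[2];
            // Detailed mesh while the building covers more than ~30% of screen height.
            lods[0] = new LOD(0.3f, new[] { detailedRenderer });
            // Impostor down to ~5% of screen height; below that the building is culled.
            lods[1] = new LOD(0.05f, new[] { simpleRenderer });

            group.SetLODs(lods);
            group.RecalculateBounds();
        }
    }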
I am building a game in XNA 4.0 where the player moves about a 2-dimensional (vertical perspective) map consisting of blocks. My issue is creating proper collision between the player and the blocks (basic game physics). The player moves more than 1px per frame, so .Intersects() alone isn't enough; I need contact-based collision that works in a gravity environment. The version I currently have is a piece of garbage and only works occasionally.
Basically, all the collision system needs to do is stop gravity when the player lands on a block, and provide some decent physics when the player hits a block (movement in that direction stops). The idea behind my current solution is to nudge the next position around until it finds a clear spot, but it doesn't work well. I have an idea why, I just have no idea how to do it properly.
I know there must be a better way to do this. What would be the best method of making this kind of collision work properly?
Thanks
This tutorial does the job perfectly; you just need to sort through a few syntax errors: http://go.colorize.net/xna/2d_collision_response_xna/
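If it helps to see the core idea in code, here is a sketch of the usual per-axis approach: move along one axis, push back out of any block you now overlap, then do the other axis. All names here are made up for illustration, and it assumes the player never moves more than about one block per frame (otherwise sub-step the movement):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    public class PlayerBody
    {
        public Vector2 Position;
        public Vector2 Velocity;
        public Point Size = new Point(32, 48);   // hitbox in pixels (assumed)
        public bool IsOnGround;

        private Rectangle GetBounds()
        {
            return new Rectangle((int)Position.X, (int)Position.Y, Size.X, Size.Y);
        }

        public void Move(List<Rectangle> blocks)
        {
            IsOnGround = false;

            // Horizontal pass: move, then push out to the block's left or right edge.
            Position.X += Velocity.X;
            foreach (Rectangle block in blocks)
            {
                if (!GetBounds().Intersects(block)) continue;
                Position.X = Velocity.X > 0 ? block.Left - Size.X : block.Right;
                Velocity.X = 0;
            }

            // Vertical pass: move, then land on top of the block or bump its underside.
            Position.Y += Velocity.Y;
            foreach (Rectangle block in blocks)
            {
                if (!GetBounds().Intersects(block)) continue;
                if (Velocity.Y > 0)
                {
                    Position.Y = block.Top - Size.Y;
                    IsOnGround = true;   // the caller can skip gravity while this is set
                }
                else
                {
                    Position.Y = block.Bottom;
                }
                Velocity.Y = 0;
            }
        }
    }

Treat this only as a starting point; the linked tutorial covers the approach in more detail.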
Hello, I exported a Blender .fbx file into my Assets folder in Unity and it was working perfectly. I positioned it in front of the camera, but when I press Play the gun looks like it's been taken apart. What's going on?! If this helps: I separated some parts so I could use them for animation, and it looks like all the animated parts are separate. Is this a Unity thing or a Blender thing?
http://www.flickr.com/photos/87198010#N07/7985274177/in/photostream
Two options:
It could be that your animation is incorrect; remove the animation to test.
The position of the child objects (barrel and trigger) may be incorrect in relation to the parent. I don't know for a fact, these are just a few things that come to mind.
Let us know when you find out.
When you import a mesh into Unity, Unity will by default add an Animation component to it, even if you don't have any animations yet. This may be what is making the gun bug out.
If there is an animation component on it, remove said component and try again.
It might also be that different parts of the gun have different scales or rotations. Before you export from Blender, select everything and apply rotation and scale (Object > Apply > Rotation & Scale), then export.