I'm making a simple game, and I have an idea for one of the player items. When the player picks up the item, the screen should start to wave (distort), but I can't get that to work. I found a shader that distorts an image, but it distorts it permanently and only works on a single texture, not the whole scene. I also tried Camera.SetReplacementShader, but then everything just turns light blue.
Any ideas appreciated!
The shader code is below:
Shader "Custom/NewShader" {
Properties {
_MainTex ("Base (RGB)", 2D) = "transparent" {}
_SpeedX("SpeedX", float)=3.0
_SpeedY("SpeedY", float)=3.0
_Scale("Scale", range(0.005, 0.2))=0.03
_TileX("TileX", float)=5
_TileY("TileY", float)=5
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
float4 uv_MainTex_ST;
float _SpeedX;
float _SpeedY;
float _Scale;
float _TileX;
float _TileY;
struct Input {
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o)
{
float2 uv = IN.uv_MainTex;
uv.x += sin ((uv.x+uv.y)*_TileX+_Time.g *_SpeedX)*_Scale;
uv.x += cos (uv.y*_TileY+_Time.g *_SpeedY)*_Scale;
half4 c = tex2D (_MainTex, uv);
o.Albedo = c.rgb * 4;
o.Alpha = c.a * 4;
}
ENDCG
}
FallBack "Diffuse"
}
You can use Post Processing Effects to solve this. It is Unity's "visual enhancement" package, basically an Instagram filter for your game camera. You can change the visuals of your game completely using this package.
Setup:
Go to Unity > Window > Package Manager
Select Unity Registry at the top and type "Post Processing" in the search bar
Download and import the package
Go to a GameObject and add the Post-process Volume component
Then go to your Main Camera and add the Post-process Layer component
On the Post-process Volume, create a New Profile and give the GameObject a new layer ("PostProcessing", perhaps)
On the Post-process Layer (on your camera), select the layer you assigned
You are all set. You can now go back to the Post-process Volume component, press "Add effect...", and play around with the various effects to create the one you want.
This works for both 2D and 3D; as long as there is a camera, it will work. For the wave effect described in the question, an effect such as Lens Distortion can then be enabled from script when the item is picked up, as sketched below.
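A rough sketch of driving such an effect from script on pickup (this assumes the Post Processing v2 package set up as above, with a Lens Distortion override added to the volume's profile; the class name, field names, and intensity value are just illustrative, and a true "wave" would need a custom effect rather than the built-in Lens Distortion):
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class WaveOnPickup : MonoBehaviour
{
    // Assign the scene's Post-process Volume in the Inspector (hypothetical field).
    public PostProcessVolume volume;

    private LensDistortion distortion;

    void Start()
    {
        // Fetch the Lens Distortion override added to the profile via "Add effect...".
        volume.profile.TryGetSettings(out distortion);
        if (distortion != null)
            distortion.active = false; // start with the effect off
    }

    // Call this from your item pickup logic.
    public void OnItemPickedUp()
    {
        if (distortion != null)
        {
            distortion.active = true;
            distortion.intensity.Override(40f); // tweak the strength to taste
        }
    }
}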
I want to create a map pin, sort of like a Google Maps pin, that stays anchored to my 3D object as I rotate around it.
Right now I am using a sprite, but when I rotate around my 3D object the sprite doesn't show from the other side; it only shows when I view the object from the side the sprite is on.
So what can I do to have my sprite stay anchored and show through the model as I rotate around it?
Think of Google Maps pin functionality (but on a 3D object, not on a flat surface).
PS. I also want to point out that I am viewing it in a 3D world, not 2D.
Try a textured quad that faces the camera. In your scene, add a 3D Object > Quad and position it above your 3D object. Make a new material and assign your map pin texture to its Albedo; set the rendering mode to Transparent, Fade, or Cutout if you have alpha values less than 1 (I recommend Fade). Assign this material to your quad by dragging the material onto the quad GameObject in the Inspector. Add a script to your quad called FaceCamera (or whatever you like); this script updates the quad's transform.forward so it always faces the camera. You should now be able to rotate around your 3D object and have the map pin appear above it while facing the camera.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FaceCamera : MonoBehaviour
{
    // Update is called once per frame
    void Update()
    {
        // Point the quad's forward axis along the camera-to-quad direction,
        // so its textured front face is always turned towards the camera.
        transform.forward = (transform.position - Camera.main.transform.position);
    }
}
Edit: To have the map pin always appear through your 3D object, create a shader via Create > Shader > Unlit Shader and edit it so that the tags are Tags { "RenderType"="Transparent" "Queue"="Transparent" }, and after Pass but before CGPROGRAM add ZTest Always and Blend SrcAlpha OneMinusSrcAlpha. You can also rename the shader to something like MapPinShader. Assign this shader to your map pin material via the shader dropdown: Unlit > 'name of shader'.
Shader "Unlit/MapPinShader"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 100
Pass
{
ZTest Always
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// make fog work
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
// apply fog
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
}
}
}
After adding an implementation of a PostProcessEffectRenderer to the Unity post-processing stack, the effect works perfectly in the Unity Editor but does not show in the built game.
Changes to build quality make no difference; the effect does not show even at maximum quality settings, building for Windows x86_64.
Grayscale.cs
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[Serializable]
[PostProcess(typeof(GrayscaleRenderer), PostProcessEvent.AfterStack, "Custom/Grayscale")]
public sealed class Grayscale : PostProcessEffectSettings
{
    [Range(0f, 1f), Tooltip("Grayscale effect intensity.")]
    public FloatParameter blend = new FloatParameter { value = 0.5f };
}

public sealed class GrayscaleRenderer : PostProcessEffectRenderer<Grayscale>
{
    public override void Render(PostProcessRenderContext context)
    {
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/Grayscale"));
        sheet.properties.SetFloat("_Blend", settings.blend);
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
Grayscale.shader
Shader "Hidden/Custom/Grayscale"
{
HLSLINCLUDE
#include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"
TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
float _Blend;
float4 Frag(VaryingsDefault i) : SV_Target
{
float4 color = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord);
float luminance = dot(color.rgb, float3(0.2126729, 0.7151522, 0.0721750));
color.rgb = lerp(color.rgb, luminance.xxx, _Blend.xxx);
return color;
}
ENDHLSL
SubShader
{
Cull Off ZWrite Off ZTest Always
Pass
{
HLSLPROGRAM
#pragma vertex VertDefault
#pragma fragment Frag
ENDHLSL
}
}
}
After much trial and error I realised that this was caused by Unity excluding the hidden shader, because nothing in the game referenced it at build time. On build, Unity only includes shaders that are either attached to a material used in a scene or listed in Project Settings under the 'Always Included Shaders' array.
I tried both and either one solved my problem. It has been suggested that creating a dummy object in your game that references the hidden shader is the better option, as it leaves Unity to decide whether the shader is needed in a scene or not. Either way, this fixed it for me.
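For the dummy-reference option, a minimal sketch is a component with a public Shader field that you assign the hidden shader to in the Inspector (the class and field names here are made up):
using UnityEngine;

// Attach this to any object in a scene that is included in the build.
// The serialized reference is enough for Unity to include the shader at build time.
public class IncludeHiddenShaders : MonoBehaviour
{
    public Shader grayscaleShader; // assign Hidden/Custom/Grayscale here in the Inspector
}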
I need to set a variable of a shader that has no material wrapping it.
I'll explain the problem and why it's not the same as the question "How can I access shader variables from script?".
Problem
My shader is similar to this (99% of irrelevant boilerplate code removed):
Shader "Hidden/XShader"
{
Properties
{
_x ("", float) = 0
}
SubShader
{
Pass
{
float _x;
half4 frag(v2f i) : SV_Target
{
// "col" and "wpos" have been correctly defined
if (wpos.x >= _x)
{
col.r = 1;
} else {
col.r = 0;
}
return col;
}
}
}
}
This shader is set through the Edit->Project Settings->Graphics->Deferred option. It is the default shader that the main camera uses.
Now I need to set the _x value from code attached to the camera:
public class XCameraController : MonoBehaviour
{
    public float x;

    void Update()
    {
        <something>.SetFloat("_x", x);
    }
}
The <something> placeholder would normally be a material, as SetFloat() is defined there. But the camera and shader do not have a material. The concept of material doesn't even apply to the default shader.
I've searched online and in the documentation for hours. I admit I failed and am at a loss here. I guess it must be simple but I can't find any documentation.
I don't expect an implemented solution, a pointer where I can find help will suffice!
But the camera and shader do not have a material. The concept of material doesn't even apply to the default shader.
True, but a material simply exposes the properties of a shader, so it is still relevant here since you want to change shader properties.
You have a custom shader, but it is used by the camera rather than to render a GameObject. A material is still needed to change the shader's properties. If you don't want to use a material, you can use the Shader.SetGlobalXXX functions, such as Shader.SetGlobalFloat("_x", 3), but these set the value globally for every shader that uses a property with that name, which is usually not what you want.
The proper way to do this is to create a temporary material that you use to modify the shader, change the shader's properties, and then update the shader the camera is using. To do this, you have to:
Find the shader, or get a reference to it through a public variable:
Shader camShader = Shader.Find("Hidden/XShader");
Create a material from the shader:
Material camMat = new Material(camShader);
Modify the property as you wish:
camMat.SetFloat("_x", 3);
Apply the modified shader to the camera:
Camera.main.SetReplacementShader(camShader, "RenderType");
If you're manually rendering the camera then use Camera.main.RenderWithShader(camShader, "RenderType") instead of Camera.main.SetReplacementShader(camShader, "RenderType").
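Combined into one script, a rough sketch of the steps above might look like this (untested; it assumes the Hidden/XShader shown earlier and a camera set up as described in the question):
using UnityEngine;

public class XCameraController : MonoBehaviour
{
    public float x;

    private Shader camShader;
    private Material camMat;

    void Start()
    {
        // 1. Find the shader (or expose a public Shader field and assign it in the Inspector).
        camShader = Shader.Find("Hidden/XShader");

        // 2. Create a material from the shader so its properties can be modified.
        camMat = new Material(camShader);

        // 4. Tell the camera to render with this shader.
        Camera.main.SetReplacementShader(camShader, "RenderType");
    }

    void Update()
    {
        // 3. Modify the property as needed.
        camMat.SetFloat("_x", x);
    }
}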
I'm making a 3D game in Unity where an object should move forward and backward as the Android device moves/accelerates along the Z axis, i.e. when the player moves the device in the +ve Z direction the object should move forward, and when the player moves the device in the -ve Z direction the object should move backward.
This is a multiplayer game, and the players will move around a large football field.
My idea is to use the accelerometer to read the device's acceleration, integrate that data to get the device's speed along the Z axis, and use that speed to move the object.
Using this equation:
V2 = V1 + ΔA · ΔT
Where:
V2 : final velocity.
V1 : initial velocity.
ΔA : difference between the initial and final acceleration.
ΔT : difference between the initial and final time.
At first I tried to use kinematic equations to calculate the final speed, but then I realized they can only be used when acceleration is constant. So a friend of mine who studies physics derived this equation for me to use when acceleration is variable.
I know there will be some error in the calculated displacement, and that the error grows after integrating the acceleration, but a small percentage of error is okay for my application. At first I thought of using GPS instead of the accelerometer, but I found that GPS accuracy would be worse than the sensors'.
I also know that the error becomes extremely large after some time, so I reset the acceleration and velocity values every 10 seconds. I'm also using a low-pass filter to reduce sensor noise.
using UnityEngine;

public class scriptMove : MonoBehaviour
{
    const float kFilteringFactor = 0.1f;
    public Vector3 A1;
    public Vector3 A2;
    public Vector3 A2ramping; // for the low-pass filter
    public Vector3 V1;
    public Vector3 V2;
    public int SpeedFactor = 1000; // this factor is for increasing acceleration to move in the Unity world

    void resetAll()
    {
        Input.gyro.enabled = true;
        A2 = Vector3.zero;
        V1 = Vector3.zero;
        V2 = Vector3.zero;
        A2ramping = Vector3.zero;
    }

    // Use this for initialization
    void Start()
    {
        InvokeRepeating("resetAll", 0, 10);
    }

    // http://stackoverflow.com/a/1736623
    Vector3 ramping(Vector3 A)
    {
        A2ramping = A * kFilteringFactor + A2ramping * (1.0f - kFilteringFactor);
        return A - A2ramping;
    }

    void getAcceleration(float deltaTime)
    {
        Input.gyro.enabled = true;
        A1 = A2;
        A2 = ramping(Input.gyro.userAcceleration) * SpeedFactor;
        V2 = V1 + (A2 - A1) * deltaTime;
        V1 = V2;
    }

    // Update is called once per frame
    void Update()
    {
        getAcceleration(Time.deltaTime);
        float distance = -1f;
        Vector3 newPos = transform.position;
        transform.Translate(Vector3.forward * Time.deltaTime * V2.z * distance);
    }
}
The problem:
My code doesn't always work as expected when I move with the device:
Sometimes when I move forward (in the +ve Z axis of the device) the object also moves forward, but sometimes it doesn't move at all.
Sometimes, when I'm standing still, the object moves by itself.
Sometimes when I move forward and suddenly stop, the object does not stop.
My questions:
Are these strange behaviours caused by the accuracy of the device, or is there something I'm missing in my code?
If I'm missing something in my code, what is it?
I searched a lot for methods to get the most accurate device position, and I found that I can combine GPS with the accelerometer. How can I do this with my code in Unity?
I don't know if you still need this, but in case anyone needs it in the future, I'll post what I found.
When I first used the Unity accelerometer I thought the output was simply the device's rotation, and in a way it is, but more than that it gives us the acceleration. To get that value, though, you must first filter out gravity.
I created an Android plugin to read Android's Accelerometer and Linear Acceleration sensors. The standard accelerometer gives a value similar to Unity's accelerometer; the main difference is that it is raw, while Unity gives a somewhat refined output. For example, if your game is in Landscape, Unity automatically swaps the X and Y axes, while the raw Android data does not. The Linear Acceleration sensor is a fusion of sensors, including the standard accelerometer, and its output is acceleration without gravity, but its update rate is terrible: while both the Unity and Android accelerometers update every frame, the Linear Acceleration sensor only updated every 4 to 5 frames, which is a terrible rate for the user experience.
Still, writing the Android plugin was worthwhile, because it showed me how to remove gravity from the Unity accelerometer, as you can find here:
https://developer.android.com/reference/android/hardware/SensorEvent.html
Under Sensor.TYPE_ACCELEROMETER
If you tilt the device, the Unity accelerometer gives you a value, for example 6, and it keeps that value for as long as you hold the device in that position; it is not a wave. If you tilt back, whether really fast or really slowly, the value goes from 6 back to 0 (assuming you move back to zero). What I wanted, and accomplished with the code I'm sharing below, is a wave: when you turn the device it returns the acceleration and then goes back to zero, i.e. an acceleration/deceleration curve. If you turn it really slowly the returned acceleration is almost zero; if you turn it fast the response reflects that speed. If this is the result you are looking for, you just need to create this class:
using UnityEngine;

public class AccelerometerUtil
{
    public float alpha = 0.8f;
    public float[] gravity = new float[3];

    public AccelerometerUtil()
    {
        Debug.Log("AccelerometerUtil Init");
        Vector3 currentAcc = Input.acceleration;
        gravity[0] = currentAcc.x;
        gravity[1] = currentAcc.y;
        gravity[2] = currentAcc.z;
    }

    public Vector3 LowPassFiltered()
    {
        /*
        https://developer.android.com/reference/android/hardware/SensorEvent.html
        gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
        gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
        gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];

        linear_acceleration[0] = event.values[0] - gravity[0];
        linear_acceleration[1] = event.values[1] - gravity[1];
        linear_acceleration[2] = event.values[2] - gravity[2];
        */
        Vector3 currentAcc = Input.acceleration;
        gravity[0] = alpha * gravity[0] + (1 - alpha) * currentAcc.x;
        gravity[1] = alpha * gravity[1] + (1 - alpha) * currentAcc.y;
        gravity[2] = alpha * gravity[2] + (1 - alpha) * currentAcc.z;

        Vector3 linearAcceleration =
            new Vector3(currentAcc.x - gravity[0],
                        currentAcc.y - gravity[1],
                        currentAcc.z - gravity[2]);
        return linearAcceleration;
    }
}
Once you have this class, just create an instance of it in your MonoBehaviour:
using UnityEngine;

public class PendulumAccelerometer : MonoBehaviour
{
    private AccelerometerUtil accelerometerUtil;

    // Use this for initialization
    void Start()
    {
        accelerometerUtil = new AccelerometerUtil();
    }

    // Update is called once per frame
    void Update()
    {
        Vector3 currentInput = accelerometerUtil.LowPassFiltered();
        // TODO: Create your logic with currentInput (Linear Acceleration)
    }
}
Notice that the TODO in the MonoBehaviour still has to be implemented; it is up to you to create an algorithm for handling these values. In my case, I found it really useful to create a graphic output and analyse my acceleration before writing it (a small debug sketch follows below).
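If you want a quick version of that kind of debug output, a hypothetical sketch using the AccelerometerUtil class above could look like this (the ray length and log format are arbitrary):
using UnityEngine;

public class AccelerationDebug : MonoBehaviour
{
    private AccelerometerUtil accelerometerUtil;

    void Start()
    {
        accelerometerUtil = new AccelerometerUtil();
    }

    void Update()
    {
        Vector3 linearAcc = accelerometerUtil.LowPassFiltered();

        // Draw the filtered acceleration as a ray from this object (visible in the Scene view).
        Debug.DrawRay(transform.position, linearAcc * 10f, Color.red);

        // Also log it so the values can be inspected or graphed later.
        Debug.Log("Linear acceleration: " + linearAcc.ToString("F3"));
    }
}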
I really hope this helps.
The movement is based on acceleration, so it will be dependent on how quickly you move your device. This is also why the object does not stop when you do: suddenly stopping your device produces a lot of acceleration, which gets added to the amount the object is translating and makes it move a much greater distance than you intend.
I think what may be easier for you is to use the attitude of the gyro rather than the userAcceleration. The attitude is a quaternion describing the device's rotation.
https://docs.unity3d.com/ScriptReference/Gyroscope-attitude.html
(You'll have to do a bit of experimenting, because I don't know what (0,0,0,0) on the attitude means. It could mean the device is flat on a table, or that it is held sideways in front of you, or it could simply be the orientation of the device when the app first starts; I don't know how Unity initialises it.)
Once you have that quaternion, you should be able to adjust velocity directly based on how far the user has rotated the device in either direction: if they rotate towards the +ve Z axis, the object moves forward; the further they rotate, the faster it moves; if they rotate towards the -ve Z axis, it slows down or moves backwards. A rough sketch of this idea follows.
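A hedged sketch of that approach (the neutral-pose capture, the choice of axis, and the speed mapping below are guesses that would need tuning per device orientation; none of it comes from the original answer):
using UnityEngine;

public class TiltMove : MonoBehaviour
{
    public float speedFactor = 5f; // illustrative scale for movement speed

    private Quaternion neutral;

    void Start()
    {
        Input.gyro.enabled = true;
        // Treat whatever orientation the player holds the device in at startup as "neutral".
        neutral = Input.gyro.attitude;
    }

    void Update()
    {
        // Rotation relative to the neutral pose.
        Quaternion delta = Quaternion.Inverse(neutral) * Input.gyro.attitude;

        // Signed tilt angle around the device's X axis, roughly "tilt forward / tilt back".
        float tilt = Mathf.DeltaAngle(0f, delta.eulerAngles.x);

        // Map tilt to forward/backward speed; the sign and axis may need flipping on your device.
        transform.Translate(Vector3.forward * (-tilt / 90f) * speedFactor * Time.deltaTime);
    }
}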
Regarding the GPS coordinates, you need to use LocationService for that.
http://docs.unity3d.com/ScriptReference/LocationService.html
You'll need to start LocationService, wait for it to initialise (this bit is important), and then you can query the individual fields using LocationService.lastData; a rough startup sketch follows.
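A sketch of that sequence, following the pattern in the LocationService documentation (the timeout value is arbitrary):
using System.Collections;
using UnityEngine;

public class GpsReader : MonoBehaviour
{
    IEnumerator Start()
    {
        // Location must be enabled by the user in the device settings.
        if (!Input.location.isEnabledByUser)
            yield break;

        Input.location.Start();

        // Wait for the service to initialise (important), with a timeout.
        int maxWait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && maxWait > 0)
        {
            yield return new WaitForSeconds(1);
            maxWait--;
        }

        if (Input.location.status != LocationServiceStatus.Running)
            yield break;

        // Now lastData can be queried each frame or on demand.
        LocationInfo data = Input.location.lastData;
        Debug.Log("Lat: " + data.latitude + " Lon: " + data.longitude + " Alt: " + data.altitude);
    }
}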
I am trying to do the same thing as you. It is not trivial to get the device's linear acceleration using just one sensor. You will need to implement a solution using both the accelerometer and the gyroscope (sensor fusion). Google has an Android-specific solution which behaves differently depending on how sophisticated your device is. It uses multiple sensors as well as low/high-pass filters (see Android TYPE_LINEAR_ACCELERATION sensor - what does it show?).
Google's Tango tablet should have sensors to address such issues.
If you want to get accelerometer data in Unity, try:
using UnityEngine;

public class scriptMove : MonoBehaviour
{
    private float accelX;
    private float accelY;
    private float accelZ;

    void Update()
    {
        accelX = Input.acceleration.x;
        accelY = Input.acceleration.y;
        accelZ = Input.acceleration.z;
        // pass values to your UI
    }
}
What I am currently trying is to port Google's solution to Unity using IKVM.
This link might be helpful too:
Unity3D - Get smooth speed and acceleration with GPS data
I would like to draw a GameObject in front of all other components in my project, including GUI textures.
I created a second camera and set its Depth and Layer, but it still does not work. I hope you can help me find the error or whatever I forgot.
Here is my MainScript, which draws a simple texture:
using UnityEngine;
using System.Collections;

public class MainScript : MonoBehaviour
{
    Texture2D texture;

    // Use this for initialization
    void Start()
    {
        texture = new Texture2D(Screen.width, Screen.height);
        for (int y = 0; y < texture.height; y++)
        {
            for (int x = 0; x < texture.width; x++)
            {
                texture.SetPixel(x, y, Color.blue);
            }
        }
        texture.Apply();
    }

    void OnGUI()
    {
        GUI.DrawTexture(new Rect(0, 0, texture.width, texture.height), texture);
    }
}
I also created two cameras and a GameObject that displays a GUI texture. The texture is visible in the preview, but at runtime the texture drawn in MainScript appears in front of it.
I made two more screenshots of my camera objects.
I can also supply the whole project for you. It is just a basic test project.
Here is the link to the Project in Google Drive: Download
Set the depth of camera2 to camera1.depth + 1, set the Clear Flags of camera2 to Depth Only and the Clear Flags of camera1 to Skybox. Uncheck GUILayer on camera2 and check GUILayer on camera1. That should do it. A code version of these settings is sketched below.
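If it is easier to verify in code, the same settings look roughly like this (camera1 and camera2 are assumed to be assigned in the Inspector; the GUILayer components still need to be toggled manually):
using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    public Camera camera1; // renders the scene
    public Camera camera2; // renders the overlay layer on top

    void Start()
    {
        camera1.clearFlags = CameraClearFlags.Skybox;
        camera2.clearFlags = CameraClearFlags.Depth; // "Depth only"
        camera2.depth = camera1.depth + 1;           // draw camera2 after camera1
    }
}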
You cannot draw 3D objects in front of GUI elements; OnGUI code always renders on top of everything.
To achieve this you can use Render Textures (Unity Pro only): have two cameras in your scene, have one camera render your 3D objects, render that camera to a texture, and finally use that texture as the source of a GUI.DrawTexture() call, as sketched below.
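A minimal sketch of that setup (the camera reference, texture size, and class name are illustrative; the second camera should be set to render only the 3D object's layer via its Culling Mask):
using UnityEngine;

public class ObjectOverGUI : MonoBehaviour
{
    public Camera objectCamera; // renders only the 3D object's layer

    private RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(Screen.width, Screen.height, 16);
        objectCamera.targetTexture = rt;                   // camera renders into the texture
        objectCamera.clearFlags = CameraClearFlags.SolidColor;
        objectCamera.backgroundColor = Color.clear;        // keep the background transparent
    }

    void OnGUI()
    {
        // Draw the rendered object over GUI drawn earlier in the GUI pass.
        GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), rt);
    }
}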