I have recently started using Photon to create multiplayer games; the tutorial I'm following is linked here: https://www.youtube.com/watch?v=93SkbMpWCGo . As the title says, the players' movements are not syncing.
Here is the code for my player:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using TMPro;
using Photon.Pun;

public class playerControls : MonoBehaviour
{
    [SerializeField] private Rigidbody2D body;
    [SerializeField] public Transform bodyTransform;
    public float speed;
    private float horizontalInput;
    public PhotonView view;

    // Update is called once per frame
    void Update()
    {
        if (view.IsMine)
        {
            if (Input.GetKeyDown(KeyCode.RightArrow) || Input.GetKeyDown(KeyCode.D))
            {
                bodyTransform.localScale = new Vector3(1.3f, 1.3f, 1.3f);
            }
            else if (Input.GetKeyDown(KeyCode.LeftArrow) || Input.GetKeyDown(KeyCode.A))
            {
                bodyTransform.localScale = new Vector3(-1.3f, 1.3f, 1.3f);
            }

            horizontalInput = Input.GetAxis("Horizontal");
            body.velocity = new Vector2(horizontalInput * (float)speed, body.velocity.y);

            if (Input.GetKeyDown(KeyCode.UpArrow) || Input.GetKeyDown(KeyCode.W))
            {
                body.velocity = new Vector2(body.velocity.x, 6);
            }
        }
    }
}
I have also added the Photon Transform View component; however, it still does not seem to work.
Everything else works fine: you can move each player individually, you can join rooms, and the players spawn. The only problem is that the player sync does not work. Any feedback would be appreciated. -Jake
Sidenote: it is playable here: https://chartreusefinancialdifferences.jake-is-theis.repl.co/
PhotonTransformView is just a component whose values can be synced over the network. You also need a PhotonView, which does the actual syncing, and you have to drag-and-drop your PhotonTransformView into its Observed Components list. So the PhotonView is the most important component for networking; PhotonTransformView just serializes/deserializes values from the Transform. If you add any class implementing IPunObservable, you likewise need to drag it into the Observed Components list for it to work.
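For completeness, here is a minimal sketch of what an IPunObservable implementation could look like if you also wanted to sync something that PhotonTransformView does not cover, for example the facing direction stored in bodyTransform.localScale from the question's script (the FacingSync name is made up; like any observed component, it still has to be dragged into the PhotonView's Observed Components list):

using UnityEngine;
using Photon.Pun;

// Hypothetical example: syncs the local scale used for facing direction.
// Must be added to the PhotonView's Observed Components list.
public class FacingSync : MonoBehaviourPun, IPunObservable
{
    [SerializeField] private Transform bodyTransform;

    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // We own this player: send our current scale.
            stream.SendNext(bodyTransform.localScale);
        }
        else
        {
            // Remote copy of the player: apply the received scale.
            bodyTransform.localScale = (Vector3)stream.ReceiveNext();
        }
    }
}

For plain position/rotation syncing the built-in PhotonTransformView already does this; the sketch is only meant to show where the Observed Components requirement comes from.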
Hi!
After the discussion with Ruzihm in the comments, I've now created a simple version of my game to better ask the question I'm having.
The question now is: since I'm not able to manually create a connection to the testObject field in the Inspector, how do I tell Unity to use my instantiated objects while the game is running?
And is this a good solution for an RTS game that may have hundreds of units active at a time? The end goal here is to apply this force to a radius around the cursor, which I was thinking of doing with Physics.OverlapSphere.
Here's the minimal scenario of what I have:
New Unity scene.
Attached the InputManager to the main camera.
Created a capsule and a plane.
Added ApplyForce to the capsule.
Created a prefab from the capsule and deleted it from the scene.
In the InputManager I added the ability to press Space to Instantiate a capsule with the ApplyForce script attached.
Dragged the capsule prefab onto the InputManager's "objectToGenerate" field.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace GL.RTS.Mites
{
    public class InputManager : MonoBehaviour
    {
        public GameObject testObject;
        public ApplyForce onSpawnTest;
        public GameObject objectToGenerate;

        void Start()
        {
            onSpawnTest = testObject.GetComponent<ApplyForce>();
        }

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                Instantiate(objectToGenerate);
            }
            if (Input.GetMouseButton(0))
            {
                onSpawnTest.PushForward();
            }
        }
    }
}
The ApplyForce script that I attach to the Capsule:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace GL.RTS.Mites
{
    public class ApplyForce : MonoBehaviour
    {
        public float moveSpeed;
        Rigidbody rb;

        void Start()
        {
            rb = GetComponent<Rigidbody>();
            Debug.Log("A Mite has spawned!");
        }

        public void PushForward()
        {
            rb.AddRelativeForce(Vector3.up * moveSpeed * Time.deltaTime);
            Debug.Log("A force of: " + moveSpeed + " is being added.");
        }
    }
}
Well, you are creating your new instances of your object, but your InputManager immediately forgets about them (note that you do nothing with the return value of Instantiate). The InputManager only knows about the ApplyForce it looked up in its Start (and then interacts with it depending on mouse input), and your ApplyForce script knows nothing about any InputManager. So it should come as no surprise that only the first instance reacts to the mouse input.
So, something has to be done to your InputManager and/or your ApplyForce. Your InputManager could remember the instances it creates (which isn't enough on its own, because what if, for example, a map trigger creates new player-controllable units?), or it could go looking for units each time.
Your ApplyForce instances could register with the InputManager when they are created, but then you would still need to loop through the units and find out which ones are under the mouse anyway. A sketch of that registration pattern follows below.
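For reference, a minimal sketch of that registration alternative might look like this (my own illustration, not part of the original project; the UnitRegistry name and its static list are made up, and the component is assumed to sit on the same GameObject as ApplyForce):

using System.Collections.Generic;
using UnityEngine;

namespace GL.RTS.Mites
{
    // Hypothetical registration pattern: each unit adds itself to a shared
    // list when it becomes active and removes itself when it is disabled,
    // so the InputManager can iterate over all live units without searching.
    public class UnitRegistry : MonoBehaviour
    {
        public static readonly List<ApplyForce> Units = new List<ApplyForce>();

        void OnEnable()
        {
            Units.Add(GetComponent<ApplyForce>());
        }

        void OnDisable()
        {
            Units.Remove(GetComponent<ApplyForce>());
        }
    }
}

The InputManager would then loop over UnitRegistry.Units instead of holding a single onSpawnTest reference.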
Since you only want to select units based on what is near or under your cursor, and only when input occurs rather than every frame, I would go with the simplest approach: just let your InputManager find the units when it needs them. Something like below, explanation in the comments:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace GL.RTS.Mites
{
    public class InputManager : MonoBehaviour
    {
        public GameObject testObject;
        public ApplyForce onSpawnTest;
        public GameObject objectToGenerate;

        private Camera mainCam;

        // which layers to consider for cursor detection
        [SerializeField] LayerMask cursorLayerMask;

        // how big the cursor detection radius is
        [SerializeField] float cursorRadius;

        void Awake()
        {
            // cache the main camera
            mainCam = Camera.main;
        }

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                Instantiate(objectToGenerate);
            }
            if (Input.GetMouseButton(0))
            {
                Collider[] colls = FindCollidersUnderCursor();

                // check each collider for an ApplyForce and use it if present
                foreach (Collider coll in colls)
                {
                    ApplyForce af = coll.GetComponent<ApplyForce>();
                    if (af != null)
                    {
                        af.PushForward();
                    }
                }
            }
        }

        Collider[] FindCollidersUnderCursor()
        {
            // Find the ray represented by the cursor position on screen
            // and find where it intersects with the ground.
            // This technique is great for when your camera can change
            // angle or distance from the playing field.
            // It uses a mathematical ray and plane; no physics
            // calculations are needed for this step. Very performant.
            Ray cursorRay = mainCam.ScreenPointToRay(Input.mousePosition);
            Plane groundPlane = new Plane(Vector3.up, Vector3.zero);
            if (groundPlane.Raycast(cursorRay, out float cursorDist))
            {
                Vector3 worldPos = cursorRay.GetPoint(cursorDist);

                // Check for colliders (including triggers) inside the sphere
                // that match the layer mask.
                return Physics.OverlapSphere(worldPos, cursorRadius,
                    cursorLayerMask.value, QueryTriggerInteraction.Collide);
            }

            // if the ray doesn't intersect with the ground, return nothing
            return new Collider[0];
        }
    }
}
Of course, this will require that every unit you're interested in manipulating has a collider (a trigger collider works too, since the query passes QueryTriggerInteraction.Collide) and sits on a layer included in cursorLayerMask.
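As a small usage note (my own addition, not from the original answer), the spawn code inside the InputManager above could make sure each instantiated capsule ends up on a layer the mask includes. This assumes a layer named "Units" has been created in the project's Tags & Layers settings and is selected in cursorLayerMask:

// Hypothetical tweak to the spawn branch in Update:
// assumes a "Units" layer exists and is selected in cursorLayerMask.
void SpawnUnit()
{
    GameObject unit = Instantiate(objectToGenerate);
    unit.layer = LayerMask.NameToLayer("Units");
}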
I published a game named Rotate Ball Pro last week, but some people told me that playing the game was not so comfortable.
I used Unity and C# for coding. My game levels have labyrinths and a sphere on them. The sphere has a Rigidbody component but the labyrinths do not. You control the labyrinth by tilting the phone so that the sphere can roll. But you must hold the phone parallel to the ground to keep the labyrinth balanced; for example, you cannot play while lying on your back or holding the phone at an angle. This makes playing uncomfortable.
I want to keep the labyrinth balanced whatever the rotation of the phone is when the game starts. I searched the web and tried many things but could not solve it. Can anyone help me?
Here is the explanation of the problem with an image: Problem
Here is the direct game link: Rotate Ball Pro
Here is my code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[System.Serializable]
public class NewMovementScript : MonoBehaviour
{
    public float multiplier = 0;

    void Start()
    {
        Input.gyro.enabled = true;
        Screen.sleepTimeout = SleepTimeout.NeverSleep;
    }

    void FixedUpdate()
    {
        var dir = Vector3.zero;
        dir.x = Input.acceleration.x * -1 * multiplier;
        dir.y = Input.acceleration.z * multiplier;
        dir.z = Input.acceleration.y * multiplier;
        transform.eulerAngles = new Vector3(dir.z, 0f, dir.x);
    }
}
Input.acceleration is the device's measured acceleration (essentially the direction gravity pulls relative to the device when it is held still) and, as you noted already, it doesn't take the initial state into account.
It is also barely related to rotations at all.
I think for your usage you would rather simply use Input.gyro.attitude!
Example from the docs:
public class Example : MonoBehaviour
{
    // Rotate the object to match the device's orientation in space.
    void Update()
    {
        transform.rotation = Input.gyro.attitude;
    }
}
If your object uses a Rigidbody, you might rather want to use:

public class Example : MonoBehaviour
{
    private Rigidbody rigidbody;

    void Awake()
    {
        rigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        rigidbody.MoveRotation(Input.gyro.attitude);
    }
}
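Since the question is specifically about keeping the labyrinth balanced regardless of how the phone is held when the game starts, here is a further sketch of my own (not from the docs) that stores the attitude at startup and only applies the rotation relative to it. Like the docs example above, it ignores any axis-convention adjustments you may need between the gyro's coordinate frame and Unity's:

using UnityEngine;

// Sketch: treat whatever orientation the phone has at startup as "level"
// and rotate the labyrinth only by the change relative to that reference.
public class RelativeTiltExample : MonoBehaviour
{
    private Quaternion referenceAttitude;

    void Start()
    {
        Input.gyro.enabled = true;
        referenceAttitude = Input.gyro.attitude;
    }

    void FixedUpdate()
    {
        // Rotation of the device relative to where it was at startup.
        Quaternion relative = Quaternion.Inverse(referenceAttitude) * Input.gyro.attitude;
        transform.rotation = relative;
    }
}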
I am trying to make a zombie wave game and currently have a prefab for my enemies. If the prefab is already in the scene when I hit Play, the enemies are attached to the NavMesh and track the player perfectly. I want to achieve the same thing, but with the enemies being spawned from an empty GameObject so I can get the waves spawning in. I have got them spawning, but they throw this error:
"SetDestination" can only be called on an active agent that has been placed on a NavMesh.
UnityEngine.AI.NavMeshAgent:SetDestination(Vector3)
EnemyAI:Update() (at Assets/Scripts/EnemyAI.cs:25)
Here is my EnemyAI Script
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class EnemyAI : MonoBehaviour
{
    public float lookRadius = 10f;
    Transform target;
    NavMeshAgent agent;
    public GameObject Player;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    // Update is called once per frame
    void Update()
    {
        float distance = Vector3.Distance(Player.transform.position, transform.position);
        if (distance <= lookRadius)
        {
            agent.SetDestination(Player.transform.position);
        }
    }
}
And here is my spawning script, which is attached to an empty GameObject:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Spawning : MonoBehaviour
{
    public GameObject prefab;
    public int CountofCubes;
    private IEnumerator coroutine;
    public float spawnRate;

    IEnumerator Start()
    {
        while (true)
        {
            for (int i = 0; i < CountofCubes; i++)
            {
                Instantiate(prefab, new Vector3(Random.Range(-25.0f, 25.0f), 0.5f, Random.Range(-25.0f, 25.0f)), Quaternion.identity);
            }
            yield return new WaitForSeconds(spawnRate);
        }
    }
}
Any help would be great, thanks!
I had the same issue and I don't have an explanation, only a workaround:
In the Start function, I added:
navAgent = GetComponent<NavMeshAgent>();
navAgent.enabled = false;
// Invoke is used as a workaround for enabling NavMeshAgent on NavMeshSurface
Invoke("EnableNavMeshAgent", 0.025f);
And the EnableNavMeshAgent function is just:
private void EnableNavMeshAgent()
{
    navAgent.enabled = true;
}
If I set the Invoke delay to a value less than 0.025 seconds the error keeps appearing, but with 0.025 I only get it twice and the behaviour is the one I wanted after that.
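As a related safeguard (my own sketch, separate from the Invoke workaround above), the EnemyAI script from the question could also skip SetDestination until the agent actually reports being on a NavMesh, which suppresses the error regardless of how the agent gets enabled:

void Update()
{
    float distance = Vector3.Distance(Player.transform.position, transform.position);

    // Only issue a destination once the agent is active and placed on the NavMesh.
    if (distance <= lookRadius && agent.isActiveAndEnabled && agent.isOnNavMesh)
    {
        agent.SetDestination(Player.transform.position);
    }
}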
Some reasons this might happen:
The agent is not level with any NavMesh, i.e. it can be too far above or below it. You want it to be "close to the NavMesh surface". Use raycasting onto the floor to position the agent, or add a Rigidbody and drop it from above the floor. After you do this you might need to disable and re-enable the agent (a sketch of sampling a valid spawn position follows below).
Moving the transform yourself rather than using the Warp function. There's a property where you can check whether the simulated and actual positions are in sync.
A corrupted NavMesh, so you might need to re-bake it.
It is essentially trying to tell you that your agent is not on the mesh, so it cannot find a path. Try playing with where you're placing the agent.
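Building on the first point, here is a minimal sketch of my own (adapting the question's Spawning script, not code from the original answer) that uses NavMesh.SamplePosition to snap each random spawn point onto the baked NavMesh before instantiating:

using System.Collections;
using UnityEngine;
using UnityEngine.AI;

public class Spawning : MonoBehaviour
{
    public GameObject prefab;
    public int CountofCubes;
    public float spawnRate;

    IEnumerator Start()
    {
        while (true)
        {
            for (int i = 0; i < CountofCubes; i++)
            {
                Vector3 randomPos = new Vector3(Random.Range(-25.0f, 25.0f), 0.5f, Random.Range(-25.0f, 25.0f));

                // Snap the random point onto the NavMesh (within 2 units) before spawning,
                // so the NavMeshAgent starts out placed on the mesh.
                if (NavMesh.SamplePosition(randomPos, out NavMeshHit hit, 2.0f, NavMesh.AllAreas))
                {
                    Instantiate(prefab, hit.position, Quaternion.identity);
                }
            }
            yield return new WaitForSeconds(spawnRate);
        }
    }
}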
I kind of remember running into a similar problem a while back, and the problem was that I had forgotten to bake the NavMesh in Window > AI > Navigation. Just in case.
I am trying to move my camera based on the player's movement on the Y axis in Unity.
However, it does not work...
What did I do wrong? I have attached my script (C#) here.
And yes, I did attach this script to the Main Camera.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CameraController : MonoBehaviour
{
    GameObject player;

    // Use this for initialization
    void Start()
    {
        this.player = GameObject.Find("cat");
    }

    // Update is called once per frame
    void Update()
    {
        Vector3 playerPos = this.player.transform.position;
        transform.position = new Vector3(
            transform.position.x, playerPos.y, transform.position.z);
    }
}
Make the player GameObject public and just drag and drop your player into the Inspector in Unity to see if that works. Are you getting any exceptions? Also add Debug.Log(player.transform.position.ToString()) to see if it is showing the right values. Are you sure your player object's name is "cat" and not "Cat"? It is case sensitive. Check on those things and let me know if you figured it out!
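For illustration, the suggested change to the question's CameraController could look roughly like this (a sketch only; the player field is assigned by dragging the "cat" object onto it in the Inspector instead of relying on GameObject.Find):

using UnityEngine;

public class CameraController : MonoBehaviour
{
    // Assign the player ("cat") here in the Inspector instead of using GameObject.Find.
    public GameObject player;

    void Update()
    {
        Vector3 playerPos = player.transform.position;

        // Follow the player on the Y axis only.
        transform.position = new Vector3(
            transform.position.x, playerPos.y, transform.position.z);
    }
}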
In short, I have a very simple multiplayer game. It's the Roll-a-Ball game (Unity3D tutorial). Right now I have the players etc. spawning perfectly and everyone is able to control their own ball perfectly fine.
But here's the problem: I've got a default Main Camera. Since it's only the local player itself that needs to see it, I figured there's no point in trying to spawn a separate camera for each player on the server.
However, to make the camera follow the player, I need to attach it to the player GameObject. Obviously I can't attach it to the player prefab, as it's a clone the camera needs to follow. But since the player is being spawned by the Network Manager component, I have no idea how to refer to this clone.
What I've tried myself:
public class CameraController : NetworkManager
{
    public GameObject playerPrefab;
    public Transform target;
    private Vector3 offset;

    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        GameObject player = (GameObject)Instantiate(playerPrefab, new Vector3(0, 0.5f, 0), Quaternion.identity);
        target = player.transform;
        NetworkServer.AddPlayerForConnection(conn, player, playerControllerId);
    }

    void Start()
    {
        offset = transform.position - target.position;
    }

    void LateUpdate()
    {
        transform.position = transform.position + offset;
    }
}
But this resulted in:
I find that extremely odd since, as you can clearly see, there's no NetworkIdentity component on the NetworkManager object. I've been trying A LOT of things for the past 4 hours now and I just can't do it, so now I'm hoping you guys can help me out.
Edit: This is how the Network Manager normally spawns a player. As you can see, there's no code for it:
I had the same issue and figured out the following solution. It seems like you already got a solution, but maybe it is interesting to share some possible approaches for other people in the same situation.
This is a way to do it without a camera attached to the prefab.
I'm using a NetworkManager to instantiate player prefabs (same as you).
I solved the problem of finding references to the clone objects by letting the clones tell the camera who they are (or rather, which transform belongs to them).
The player has the following script:
using UnityEngine;
using UnityEngine.Networking;
using System.Collections;

public class PlayerController : NetworkBehaviour
{
    public override void OnStartLocalPlayer()
    {
        GetComponent<MeshRenderer>().material.color = Color.blue;
        Camera.main.GetComponent<CameraFollow>().target = transform; // Fix camera on "me"
    }

    void Update()
    {
        if (!isLocalPlayer)
        {
            return;
        }

        var x = Input.GetAxis("Horizontal") * Time.deltaTime * 150.0f;
        var z = Input.GetAxis("Vertical") * Time.deltaTime * 3.0f;

        transform.Rotate(0, x, 0);
        transform.Translate(0, 0, z);
    }
}
On my default Main Camera (there is no camera attached to the player prefab, just the default camera) I put the following script. It takes a target, which I initialised with the prefab using the Inspector.
using UnityEngine;
using System.Collections;

public class CameraFollow : MonoBehaviour
{
    public Transform target;     // what to follow
    public float smoothing = 5f; // camera speed

    Vector3 offset;

    void Start()
    {
        offset = transform.position - target.position;
    }

    void FixedUpdate()
    {
        Vector3 targetCamPos = target.position + offset;
        transform.position = Vector3.Lerp(transform.position, targetCamPos, smoothing * Time.deltaTime);
    }
}
After starting the game, each clone tells the camera who it is, so the target changes to the individual client's clone with this line from the player's script:
Camera.main.GetComponent<CameraFollow>().target = transform; // Fix camera on "me"
This way you don't need to create one camera per instance of the player prefab (I'm not sure if this makes a big difference in performance) and you don't have to deactivate all the cameras that don't belong to your client.
If you host the game in the editor you can see that there is just one camera, instead of one camera per connected client (like when you attach it to the prefab).
I think OnStartLocalPlayer is a good use of this method; you can put anything in it that should be applied to the local player only:
public override void OnStartLocalPlayer()
{
}
I tried it by starting the game both in the editor and in a build, and it seems to work well.
I would add a camera to the prefab and then write a player script like this:
using UnityEngine;
using UnityEngine.Networking;

public class Player : NetworkBehaviour
{
    public Camera camera;

    void Awake()
    {
        if (!isLocalPlayer)
        {
            camera.enabled = false;
        }
    }
}
I've not really worked with networking, but what if you do this after you spawn the local player:
Camera.main.transform.SetParent(the transform of the local player here);
As I understand the problem, each separate instance of the game has a main camera.
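For illustration, a minimal sketch of that suggestion (my own wording, assuming the same UNet NetworkBehaviour setup as the other answers; the offset and angle values are arbitrary) could parent the scene's main camera to the local player when it starts:

using UnityEngine;
using UnityEngine.Networking;

public class CameraParenting : NetworkBehaviour
{
    public override void OnStartLocalPlayer()
    {
        // Attach the single scene camera to this client's own player clone,
        // keeping a fixed local offset behind and above the player.
        Camera.main.transform.SetParent(transform, worldPositionStays: false);
        Camera.main.transform.localPosition = new Vector3(0f, 5f, -8f);
        Camera.main.transform.localRotation = Quaternion.Euler(30f, 0f, 0f);
    }
}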
Thanks to Rafiwui pointing me in the right direction, I've finally managed to get it working. All I had to do was adapt his code a bit. The end result was:
public Camera camera;

void Awake()
{
    camera.enabled = false;
}

public override void OnStartLocalPlayer()
{
    camera.enabled = true;
}
Thanks A LOT to you all for helping me out! This has been quite a day for me.