I've recently run into an issue where sending an SMS with the AT+CMGS= command returns +CMS ERROR: 305. Upon further inspection I discovered that there seems to be a limit on the number of characters the message body can contain (160 characters max, from my testing). As a workaround I've written code that splits the message into 160-character chunks and sends each chunk as a separate SMS. Management, however, does not like this design, as it looks quite messy to be honest.
Is there any way I can get around this issue and send more than 160 characters on a single SMS message?
The 160-character limit is a hard limit imposed by the protocol definition of how a phone exchanges SMS messages with the network. There is, however, a way for the sending phone to split a long message into multiple parts that are sent (and billed) separately, but marked in such a way that the receiving phone is able to concatenate them back into one large message that is presented to the user. So there is virtual support for sending large messages (multi-part, or concatenated SMS, is the technical term).
You do not say whether you are sending messages in text or PDU mode with AT+CMGS, but I am guessing text mode, and as far as I know text mode does not support this, so you have to use PDU mode (related answer).
Related
I am very new to SMS and stumbled upon JamaaSMPP, but I am having trouble sending a message that is longer than 160 characters. I've read that I need to use PDU mode, but I am not sure what to do or how.
I'm currently learning how to use sockets in C#, and have a question about how the messages between the client and the server should be structured.
Currently I have a server application and a client application, and in each application I have some strings that act as commands. When, for example, the client needs the time from the server, I have a string like this:
public const string GET_TIME_COMMAND = "<GET_TIME_COMMAND>";
Then I have an if statement on the server that checks whether the message sent from the client starts with that string; if so, it sends another message back to the client with another command and the time in a JSON string.
My question is: is this a good way to do it, and if not, could you advise me on another approach?
TCP
Keep in mind that TCP is a stream-based connection. You may or may not get the complete command in one read; you may even get multiple commands in one read.
To solve this, TCP-based protocols usually give each message a unique start and stop sequence (or byte) that may not appear inside the message itself.
(SomeCommand)
Where ( is the start and ) is the stop symbol.
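A minimal sketch of this start/stop framing, written in Python for brevity (the `(`/`)` delimiters are just the ones from the example above; a real protocol would also need to escape them if they can occur in the payload):

```python
def extract_messages(buffer: bytes):
    """Pull complete (...)-framed messages out of a receive buffer.

    Returns (messages, remainder) where remainder holds any partial
    message still waiting for its ')' stop byte.
    """
    messages = []
    while True:
        start = buffer.find(b"(")
        if start == -1:
            return messages, b""             # no start byte yet, discard noise
        stop = buffer.find(b")", start)
        if stop == -1:
            return messages, buffer[start:]  # incomplete, keep for the next read
        messages.append(buffer[start + 1:stop])
        buffer = buffer[stop + 1:]
```

You would call this on the accumulated receive buffer after every read, dispatch the complete messages, and keep the remainder for the next read.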
An alternative way is to prepend a header to the actual message that contains the message length.
11 S O M E M E S S A G E
Where 11 is the message length and SOMEMESSAGE is the actual message. You'd usually transmit the length as a byte or ushort, not a string literal.
In both cases you have to read over and over until you have one complete message - then you can dispatch it into the application.
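That "read over and over" loop for the length-prefixed variant might look like this (a sketch in Python, assuming the 2-byte big-endian ushort length prefix suggested above):

```python
import struct

def read_exactly(sock, n):
    """Loop on recv until exactly n bytes have arrived."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        data += chunk
    return data

def read_message(sock):
    """Read one length-prefixed message: 2-byte big-endian length, then body."""
    header = read_exactly(sock, 2)
    (length,) = struct.unpack(">H", header)
    return read_exactly(sock, length)
```

Only once `read_message` returns do you have a complete message to dispatch into the application.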
Also, TCP is connection-based: you have to connect to the remote site. The advantage is that TCP makes sure that all messages arrive in the very order you sent them. TCP will also automatically re-send lost packets, so you don't have to worry about that.
UDP
In contrast to that, UDP is message/packet-based, but it is not reliable. You may or may not get the message, and you may have to re-send it in some cases. UDP also doesn't have a notion of a "session"; you would have to implement that yourself if required.
The answer to your question depends on the protocol used. For TCP this won't work well with your current message format. You'd probably have to prepend a header.
You could use UDP, but then you may have to detect and re-send messages that got lost.
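To illustrate the difference: with UDP each send is one datagram, so message boundaries survive on their own (sketched here in Python on the loopback interface; the command string is the one from the question):

```python
import socket

# Two UDP sockets on loopback; each sendto() becomes one datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<GET_TIME_COMMAND>", addr)
sender.sendto(b"<OTHER_COMMAND>", addr)

# Unlike TCP, the two messages cannot merge into one read --
# but over a real network either datagram could also be dropped
# or arrive out of order.
first, _ = receiver.recvfrom(4096)
second, _ = receiver.recvfrom(4096)
```

On loopback this reliably delivers both datagrams intact; across a real network you would still need your own acknowledgement/retry logic.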
In my project, I want to send a Unicode (UTF-8) text SMS message through PDU-Submit. I've been searching a lot, but all the answers use text mode rather than the PDU-Submit command, and therefore can't send multipart SMS. I'm looking for a solution for multi-part Unicode messages.
Finally I have found the answer and used it; my program works fine. Sending a concatenated (multi-part) SMS in Unicode format using PDU mode is the same as sending a simple septet-character SMS using the AT+CMGS command, except that you must set the DCS byte to 08. You can find more info in these threads:
Add UDH for concatenated Unicode SMS
http://en.wikipedia.org/wiki/Concatenated_SMS#PDU_Mode_SMS
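As those links describe, each part of a concatenated message carries a User Data Header. A sketch of building the common 6-byte variant (concatenation IEI 0x00 with an 8-bit reference number):

```python
def concat_udh(ref: int, total: int, seq: int) -> bytes:
    """Build the UDH for one part of a concatenated SMS.

    Layout: UDHL=05, IEI=00 (concatenation, 8-bit reference), IEDL=03,
    then the reference number, total part count, and this part's
    1-based index.
    """
    assert 1 <= seq <= total <= 255 and 0 <= ref <= 255
    return bytes([0x05, 0x00, 0x03, ref, total, seq])

# Part 2 of 3, reference number 0x42:
header = concat_udh(0x42, 3, 2)
```

Remember that the TP-UDHI bit (0x40) must be set in the first octet of the SMS-SUBMIT PDU so the receiver knows a header is present, and these 6 octets count toward the user-data length.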
You can send SMS messages with the AT+CMGS command in PDU mode (enable it with AT+CMGF=0). The syntax (for PDU mode) is
AT+CMGS=<length><CR>
PDU is given<ctrl-Z/ESC>
I do not know if you are supposed to split the message into multiple parts yourself and send each part separately, or if this command does that for you. I think maybe the latter; the description of the command says
Execution command sends message from a TE to the network (SMS-SUBMIT).
If/when you find out, feel free to update this answer with regards to that.
I've developed a nice free game for Windows Phone 7 called Domination, which is, despite the early release, quite a success!
Now I'm developing an online multiplayer version with some interesting features, and now that I've almost reached the end, I'm encountering a BIG problem:
WEIRD packet loss, or something like that.
I have a sample for reproducing the problem.
I have a server.
I have a WinForms client.
I have an XNA client.
Steps to reproduce the problem:
1) Start the server, the WinForms client and the game (you need an emulator and the WP7 SDK).
2) Press the GO button, and the form will open the TCP channel to the server.
3) Press the screen on the emulator, and the game will open the TCP channel to the server.
4) Now, each time you press the emulator screen, or the GO button on the WinForms client, the server will send back 50 messages to the corresponding client.
Well, the problem is that:
1) the WinForms client usually receives all 50 messages and only RARELY loses about 10 packets in one exchange
2) the emulator ALWAYS loses 30-45 messages!
I've tried other ways, but nothing changed.
One tip: if I put a Thread.Sleep(10) (that's 10 milliseconds) before each server send, it works perfectly!
Can anyone help me please? I just don't know which way to turn!
samples can be found here:
http://uploading.com/files/d7e7939c/Projects.zip/
No messages are being lost. They are all being received. Your code is just not correctly interpreting the received data. If you look at the number of bytes received, it will be correct. If you look at the data in the bytes received, it will be correct. The rest is up to your code.
TCP provides a byte stream service. That means you get out the same bytes you send. If you need to "glue" those bytes together into messages, then you must write code to do that. If you send 30 bytes, you might receive 30 bytes, or 10 bytes and then 20 bytes, or 1 byte 30 times, or anything in-between. If you send 5 bytes and then 3 bytes, you might receive 5 bytes and then 3 bytes. Or you might receive 8 bytes. Or you might receive 1 byte 8 times.
You must define a message format. You must code the sender to send messages in that format. You must code the receiver to identify when it has received a complete message.
What's happening is that you are sending "FOO" and then "BAR" and receiving "FOOBAR". Nothing has been lost in this process except the boundary between "FOO" and "BAR". However, TCP has no concept of a message boundary. If you need them, you must code them. For example, you could send "3FOO3BAR". Now, no matter how that gets received, the receiver knows that "FOO" is one message and "BAR" is another. Another choice is "foo\nbar\n" (foo newline bar newline). The receiver knows that it has a complete message when it receives a newline character.
But however you define an application-level message, you must actually write the code to do it. It won't happen by itself.
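The "3FOO3BAR" framing described above can be sketched in a few lines (Python here for brevity; the single-digit length prefix just mirrors the example, a real protocol would use a fixed-size binary length):

```python
def split_messages(stream: str):
    """Parse a stream like '3FOO3BAR' into its framed messages."""
    messages = []
    i = 0
    while i < len(stream):
        length = int(stream[i])              # one-digit length prefix, as in the example
        messages.append(stream[i + 1:i + 1 + length])
        i += 1 + length
    return messages
```

However the bytes arrive (all at once, or one at a time and accumulated), the receiver recovers the original boundaries from the prefixes.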
I'm having an issue using sockets in C#. Here's an example: say I send the number 1, then I immediately send the number 2. The problem I sometimes run into is that the client that is supposed to receive them gets a single packet containing '12'. I was wondering if there is a built-in way to distinguish packets without using characters or something similar to separate the data.
To sum up again: I sent two packets, one with the number '1', one with the number '2'.
The server receives 1 packet with the data '12'.
I don't want to separate the packets with characters, like ':1::2:' or anything like that, because I don't always have control over the format of the incoming data.
Any ideas?
For example, if I do this
client.Send(new byte[] { (byte)'1' }, 1, SocketFlags.None);
client.Send(new byte[] { (byte)'2' }, 1, SocketFlags.None);
then on the server end
byte[] data = new byte[1024];
client.Receive(data);
data sometimes comes back as "12" even though I did two separate sends.
TCP/IP works on a stream abstraction, not a packet abstraction.
When you call Send, you are not sending a packet. You are only appending bytes to a stream.
When you call Receive, you are not receiving a packet. You are only receiving bytes from a stream.
So, you must define where a "message" begins and ends. There are only three possible solutions:
Every message has a known size (e.g., all messages are 42 bytes). In this case, you just keep calling Receive until you get that many bytes.
Use delimiters, which you're already aware of.
Use length prefixing (e.g., prefix each message with a 4-byte length-of-message). In this case, you keep calling Receive until the length prefix has arrived, decode it to get the actual length of the message, and then keep calling Receive until the message itself is complete.
You must do the message framing yourself. TCP/IP cannot do it for you.
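The third option (length prefixing) might look like this, sketched in Python with a 4-byte big-endian prefix; the same idea translates directly to C# with `BitConverter` or `BinaryPrimitives`:

```python
import struct

def frame(message: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(message)) + message

def unframe(buffer: bytes):
    """Extract complete messages; return (messages, leftover partial bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break                            # message body not fully arrived yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer
```

The sender wraps every payload with `frame`, and the receiver feeds whatever bytes arrive into an accumulated buffer and calls `unframe` on it; the original '1' and '2' come back out as separate messages no matter how TCP delivered them.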
TCP is a streaming protocol, so you will always receive whatever data has arrived since the last read, up to the streaming window size on the recipient's end. This buffer can be filled with data received from multiple packets of any given size sent by the sender.
TCP Receive Window Size and Scaling # MSDN
Although you have observed 1 unified blob of data containing 2 bytes on the receiving end in your example, it is possible to receive sequences of 1 byte by 1 byte (as sent by your sender) depending on network conditions, as well as many possible combinations of 0, 1, and 2 byte reads if you are doing non-blocking reads. When debugging on a typical uncongested LAN or a loopback setup, you will almost never see this if there is no delay on the sending side. There are ways, at lower levels of the network stack, to detect per-packet transmission but they're not used in typical TCP programming and are out of application scope.
If you switch to UDP, then every packet will be received as it was sent, which would match your expectations. This may suit you better, but keep in mind that UDP has no delivery promises and network routing may cause packets to be delivered out of order.
You should look into delimiting your data or finding some other method to detect when you have reached the end of a unit of data as defined by your application, and stick with TCP.
It's hard to answer your question without knowing the context. By default, TCP/IP handles packet management for you automatically (although you receive the data in a stream-based fashion). However, with a very specific (bad?) implementation you could send multiple streams over one socket at the same time, making it impossible for the lower-level TCP/IP stack to tell them apart, and thereby very hard to identify the different streams on the client. The only solution for that would be to send two totally distinct streams (e.g., stream 1 only sends bytes below 127 and stream 2 only sends bytes greater than or equal to 127). But again, that is terrible design.
You must put into your TCP messages delimiters or use byte counts to tell where one message starts and another begins.
Your code has another serious error: TCP sockets may not give you all the data in a single call to Receive. You must loop on receiving data until your application-specific framing indicates that the entire message has been received. Your call to client.Receive(data) returns the count of bytes received; you should capture that number.
Likewise, when you send data, not all of it might be sent in a single call. You must loop on sending data until the count of bytes sent equals what you intended to send. The call to client.Send returns the actual count of bytes sent, which may be less than what you tried to send!
The most common error I see people make with sockets is that they don't loop on the Send and Receive. If you understand why you need to do the looping, then you know why you need to have a delimiter or a byte count.
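The two loops described above might look like this (sketched in Python; C#'s Socket.Send and Socket.Receive have the same partial-transfer behaviour):

```python
def send_all(sock, data: bytes) -> None:
    """Loop on send until every byte has gone out."""
    sent = 0
    while sent < len(data):
        n = sock.send(data[sent:])
        if n == 0:
            raise ConnectionError("socket closed during send")
        sent += n

def recv_all(sock, count: int) -> bytes:
    """Loop on recv until exactly count bytes have been read."""
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("socket closed during recv")
        data += chunk
    return data
```

(Python sockets also offer `sendall` for the first loop, but writing it out shows why the return value of each call must be checked.)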