I want to send SMS messages to a bulk of users (4,000 users), so I put the following method in a loop:
protected int SendSMS(string url)
{
    // Now to send the data.
    StreamWriter writer = null;
    StringBuilder postData = new StringBuilder();
    Uri myUri = new Uri(url);
    postData.Append(HttpUtility.ParseQueryString(myUri.Query).Get("Username"));
    postData.Append(HttpUtility.ParseQueryString(myUri.Query).Get("Password"));
    postData.Append(HttpUtility.ParseQueryString(myUri.Query).Get("Sender"));
    postData.Append(HttpUtility.ParseQueryString(myUri.Query).Get("Recipients"));
    postData.Append(HttpUtility.ParseQueryString(myUri.Query).Get("MessageData"));

    string webpageContent = string.Empty;
    byte[] byteArray = Encoding.UTF8.GetBytes(postData.ToString());

    HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url);
    webRequest.Method = "POST";
    webRequest.ContentType = "application/x-www-form-urlencoded";
    webRequest.ContentLength = byteArray.Length;
    writer = new StreamWriter(webRequest.GetRequestStream());

    try
    {
        using (Stream webpageStream = webRequest.GetRequestStream())
        {
            webpageStream.Write(byteArray, 0, byteArray.Length);
        }

        using (HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse())
        {
            using (StreamReader reader = new StreamReader(webResponse.GetResponseStream()))
            {
                webpageContent = reader.ReadToEnd();
                // TODO: parse webpageContent: if the response contains "OK"
                if (webpageContent.Contains("OK")) return 1;
                else return 0;
            }
        }
        //return 1;
    }
    catch (Exception ee)
    {
        ErrMapping.WriteLog(url);
        string error = ee.Message + "<br><br>Stack Trace : " + ee.StackTrace;
        ErrMapping.WriteLog(error);
        return -1;
    }
}
After a specific number of users (around 65), no SMS is sent to the rest of the users and I get the following exception:
Error Message:Thread was being aborted.<br><br>Stack Trace : at System.Net.UnsafeNclNativeMethods.OSSOCK.recv(IntPtr socketHandle, Byte* pinnedBuffer, Int32 len, SocketFlags socketFlags)
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, SocketError& errorCode)
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
at System.Net.ConnectStream.ProcessWriteCallDone(ConnectionReturnResult returnResult)
at System.Net.HttpWebRequest.CheckDeferredCallDone(ConnectStream stream)
at System.Net.HttpWebRequest.GetResponse()
at SendSMS_EmailUI.Frm_SMS_send.SendSMS(String url)
IMHO, the bulk operations performed on behalf of the application can be handled easily by the following procedure:
When a bulk SMS is triggered, the details are entered in a database table.
A Windows service constantly monitors this table for new entries.
When the Windows service finds new entries, it takes a few records at a time (a few hundred, say) and sends them: batch processing.
There can be a delay between consecutive requests.
This lets you track which line items have failed, and it also avoids clogging the server with the bulk data.
This is a widely suggested approach.
Please provide your comments on this implementation.
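For concreteness, a minimal, hedged sketch of the polling pass such a Windows service might run. The queue item type and the data-access helpers are hypothetical stand-ins for your own table access; SendSMS is the method from the question.

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical queue item; replace with your own table schema.
class SmsQueueItem { public int Id; public string Url; }

class SmsDispatcher
{
    // Called on a timer (or from a Windows service) every 30 seconds or so.
    public void ProcessPendingBatch()
    {
        // Take a few hundred pending rows at a time (batch processing).
        List<SmsQueueItem> batch = LoadPendingBatch(200);

        foreach (SmsQueueItem item in batch)
        {
            int result = SendSMS(item.Url);   // the method from the question
            MarkProcessed(item.Id, result);   // record 1 = OK, 0/-1 = failed per row
            Thread.Sleep(100);                // delay between consecutive requests
        }
    }

    // Stubs standing in for your database access and the existing SendSMS method.
    List<SmsQueueItem> LoadPendingBatch(int count) { return new List<SmsQueueItem>(); }
    void MarkProcessed(int id, int result) { }
    int SendSMS(string url) { return 1; }
}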
I have built an SMS portal. What you describe was also experienced in v1.0 of my application. The solution was to have my SMS gateway provide me with bulk SMS access via HTTP. I could put up to 1000 destinations into an XML (or comma-delimited) package and send it to the bulk SMS gateway. Because I run on a shared host, I limited this to 500 destinations.
I have a cache/temporary storage table where I batch large destination lists (up to 1,000,000 in some cases), and a scheduler (timer based) sends each batch of 500 every few seconds (by calling a script repeatedly) until the messages are sent. Works like a charm!
For personalized messages, I encourage the client to use my desktop application for personalization before forwarding to my SMS portal. Good luck.
PROCESS:
You'll need three items:
The Script that receives the SendSMS request
The script that sends the SMS
The Scheduler/Timer (Script/Host Service)
A. The Send SMS request arrives with
a. The Message and Sender ID/ Sender GSM Number.
b. The Destinations (as a comma delimited list). We'll assume 10,000 destinations
B. Split the destinations into batches of 500 (any size you wish) and log each batch of 500 destinations, along with the message and sender ID, in its own INBOX row/record (a sketch of this step follows the list below).
Note: if you count out 500 numbers by looping over every destination (10,000 loops), the script could time out. GSM numbers in my country are 13 digits,
so instead I take substrings of length 500 × (13 + 1) to get 500 destinations per batch (20 loops).
C. Call the script that sends the SMS. It sends the first 500 and tags the batch as Sent (you can also record the time sent). Start the scheduler.
D. The scheduler checks every 1.5 minutes whether any unsent batches exist in the INBOX and sends them. If there are none, the scheduler stops.
So, 10,000 messages are sent within 30 minutes.
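To illustrate step B, a hedged C# sketch of the batching. This version splits on the commas rather than using the fixed-length substring trick described above, since that trick only works when every number is exactly 13 digits:

using System;
using System.Collections.Generic;

class DestinationBatcher
{
    // Split a comma-delimited destination list into batches of batchSize numbers.
    public static List<string> SplitIntoBatches(string destinations, int batchSize)
    {
        string[] numbers = destinations.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        var batches = new List<string>();

        for (int i = 0; i < numbers.Length; i += batchSize)
        {
            int count = Math.Min(batchSize, numbers.Length - i);
            batches.Add(string.Join(",", numbers, i, count));   // one INBOX row per batch
        }
        return batches;
    }
}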
We do something similar to what saravan suggested for email messages, and I suspect it will work for SMS.
Basically the service that runs on our web server only sends x at a time, and there is a custom delay y between each send.
It can send two thousand in less than ten minutes with neither a CPU nor a bandwidth spike.
Some tips that weren't in our original design to keep in mind:
Have a way to manually stop all sending (see next tip)
Have a user friendly way to terminate a particular message. If your code (or the user) accidentally sends the same thing five times, you want a way to abort it, quickly.
Use a config file for both numbers (x & y above) so you can adjust them without redeploying; a sketch of this follows the list. I think the y delay is 50 ms.
Before you decide to bundle the same message to multiple recipients, make sure that smartphones can't reply to everyone else in their bundle.
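For what it's worth, a minimal, hedged sketch of such a throttled sender. The appSettings key names and the delegate parameters are made up for illustration; wire it to your own queue and send call:

using System;
using System.Configuration;   // reference System.Configuration
using System.Threading;

class ThrottledSender
{
    // Reads the batch size (x) and per-send delay (y) from config so they can be
    // tuned without redeploying. Key names here are assumptions, not a standard.
    public void SendPending(Func<string[]> loadNextBatch, Action<string> sendOne)
    {
        int batchSize = int.Parse(ConfigurationManager.AppSettings["SendBatchSize"] ?? "100");
        int delayMs   = int.Parse(ConfigurationManager.AppSettings["SendDelayMs"] ?? "50");

        string[] batch = loadNextBatch();
        for (int i = 0; i < batch.Length && i < batchSize; i++)
        {
            sendOne(batch[i]);       // your actual SMS/email send call
            Thread.Sleep(delayMs);   // custom delay between each send
        }
    }
}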
HTH,
-Chris C.
I suspect the problem is that, because this is a long-running task on an ASP.NET website, the response is timing out.
This timeout is controlled in the web.config and its default is 110 seconds. Increase it to a much larger number and see if it starts working.
<system.web>
  <!-- 600 seconds = 10 minute timeout -->
  <httpRuntime executionTimeout="600"/>
</system.web>
A better approach would be to use a separate thread and return progress updates to the user; this is more complex, but ultimately more reliable.
See Parallel.ForEach for a simple way to thread this process.
MSDN - httpRuntime documentation
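For illustration, a hedged sketch of parallelizing the sends with Parallel.ForEach, assuming the SendSMS(string url) method from the question and a list of per-recipient gateway URLs. This is a sketch, not a drop-in fix; you would still want it off the request thread or behind a raised executionTimeout:

using System.Collections.Generic;
using System.Threading.Tasks;

void SendAll(List<string> urls)
{
    // Cap the parallelism so the SMS gateway is not flooded with requests.
    var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };

    Parallel.ForEach(urls, options, url =>
    {
        int result = SendSMS(url);   // the existing method; returns 1 (OK), 0 or -1
        // TODO: record the per-recipient result somewhere durable.
    });
}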
This is not answering your original question, but is related useful info. Once you solve this problem, you may hit another -- telcos routinely block bulk-sent SMS messages as a way to suppress SMS spam. If you will be doing this on a commercial scale, you will need to file a brief with the Mobile Marketing Association, and get their approval, which can take a considerable amount of time.
Check whether your StreamWriter is getting disposed:
using (StreamWriter writer = new StreamWriter(webRequest.GetRequestStream()))
{
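    // Write the POST body here; disposing the writer flushes and closes the request stream.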
}
Maybe the buffer is full; check it and set it to the maximum value in the web.config file.
Maybe you are making web requests too fast: try slowing them down to rule out timing problems, for example by adding a sleep:
System.Threading.Thread.Sleep(1000);
1000 ms = 1 s is only a hint; you should try different values to see if anything changes.
Related
I am writing a small HTTP server, and sometimes I encounter a problem with missing POST data.
Using Wireshark I discovered that the header is split into two segments.
I only get the first segment (636 bytes); the second one (the POST data in this case) gets totally lost.
Here is the relevant C# code:
string requestHeaderString = "";
StreamSocket socketStream = args.Socket;
IInputStream inputStream = socketStream.InputStream;
byte[] data = new byte[BufferSize];
IBuffer buffer = data.AsBuffer();

try
{
    await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    // This is where things go missing: buffer.ToArray() should be 678 bytes long,
    // i.e. segment 1 (636 bytes) and segment 2 (42 bytes) combined,
    // but it is only 636 bytes long, so just the first segment?!
    requestHeaderString += Encoding.UTF8.GetString(buffer.ToArray());
}
catch (Exception e)
{
    Debug.WriteLine("inputStream is not readable" + e.StackTrace);
    return;
}
This code is part of the StreamSocketListener's ConnectionReceived event handler.
Do I have to reassemble the TCP segments manually? Isn't that what the system's TCP stack should do?
Thanks,
David
The problem is that the system's TCP stack treats the TCP stream just like any other stream. You don't get "messages" with streams; you just get a stream of bytes.
The receiving side has no way to tell where one "message" ends and the next begins unless you tell it somehow. You must implement message framing on top of TCP; then, on your receiving side, you must repeatedly call Receive until you have received enough bytes to form a full message (this involves using the int returned from the Receive call to see how many bytes were read).
Important note: if you don't know how many bytes to expect in total, for example because you frame messages with a '\0' separator, you may get the end of one message and the start of the next in a single Receive call. You will need to handle that situation.
EDIT: Sorry, I skipped over the fact that you are reading HTTP. You must follow the HTTP protocol: read data until you see the pattern \r\n\r\n; once you have that, parse the header, determine how much data is in the content portion of the HTTP message, and then repeatedly call Read until you have read that number of bytes.
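To make that concrete, a hedged sketch of HTTP framing over a plain Stream (the WinRT IInputStream from the question can be adapted, e.g. with AsStreamForRead()). It reads the headers up to \r\n\r\n, parses Content-Length, then keeps reading until the whole body has arrived:

using System;
using System.IO;
using System.Text;

static string ReadHttpRequest(Stream stream)
{
    var data = new MemoryStream();
    var one = new byte[1];

    // 1. Read until the blank line (\r\n\r\n) that terminates the headers.
    while (!EndsWith(data, "\r\n\r\n"))
    {
        if (stream.Read(one, 0, 1) <= 0) throw new IOException("connection closed");
        data.WriteByte(one[0]);
    }
    string headers = Encoding.UTF8.GetString(data.ToArray());

    // 2. Parse Content-Length to know how many body bytes to expect.
    int contentLength = 0;
    foreach (string line in headers.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
        if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
            contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());

    // 3. Keep calling Read until the full body has arrived (it may come in pieces).
    var body = new byte[contentLength];
    int read = 0;
    while (read < contentLength)
    {
        int n = stream.Read(body, read, contentLength - read);
        if (n <= 0) throw new IOException("connection closed");
        read += n;
    }
    return headers + Encoding.UTF8.GetString(body);
}

static bool EndsWith(MemoryStream ms, string marker)
{
    byte[] m = Encoding.ASCII.GetBytes(marker);
    if (ms.Length < m.Length) return false;
    byte[] buf = ms.GetBuffer();
    for (int i = 0; i < m.Length; i++)
        if (buf[(int)ms.Length - m.Length + i] != m[i]) return false;
    return true;
}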
I'm writing a bot for moderating my twitch.tv channel in C#.
Here's the basic code for the loop, which runs on a background worker to avoid UI freezes. There's a TcpClient (Client), StreamReader (Reader), StreamWriter (Writer), and NetworkStream (Stream).
private void listener_dowork(object sender, DoWorkEventArgs e)
{
    string Data = "";
    while ((Data = Reader.ReadLine()) != null)
    {
        // Perform operations on the received data
    }
    Console.WriteLine("Loop ended"); // this shouldn't happen
}

private void listener_workercompleted(object sender, RunWorkerCompletedEventArgs e)
{
    // basically, display a console message that says "OOPS!" and try to reconnect.
}
I get the message "Loop ended" and "OOPS!" and at that point, I get the exception (which I cannot for the life of me catch).
The thing is, I can physically unplug the network cable from my computer, wait 30 seconds, and plug it back in, and it'll continue just fine.
The full exception is:
System.Net.Sockets.SocketException (0x80004005): An established connection was aborted by the software in your host machine
at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
Note the lack of a line number, which is present in every other kind of exception I've had, which means I have no idea which part of the program is causing the exception, even though I've put every possible line inside a try/catch.
I guess what I'm looking for is some insight into why this is occurring.
It happens invariably every time I start the bot and leave it running for a few minutes on any channel, though the number of minutes varies.
As I already said in the comments, Twitch.tv uses IRC as the underlying system for its chat. In order to stay connected to the server you need to reply to "PING" requests that are sent frequently by the server (usually every 30 seconds; this may vary depending on the server's implementation). You can read more about the IRC client protocol in RFC 2812.
You said you already have a StreamWriter and Reader; all you need to do is check whether the line contains "PING" and reply with a "PONG":
if (Data.Contains("PING"))
{
    _streamWriter.WriteLine(Data.Replace("PING", "PONG"));
    _streamWriter.Flush();
}
The program locks up while it is downloading the file, using the code below. What might be the problem?
if (bufferInfo.Contains("fileExists"))
{
    FileStream downloadFileStream = new FileStream(folderName + "\\" + requestFileName.Text, FileMode.Create);
    activityLog.AppendText("File is found, it will be downloaded !");

    byte[] myReadBufferExists = new byte[8196];
    do
    {
        bytesRead = clientSocket.Receive(myReadBufferExists);
        downloadFileStream.Write(myReadBufferExists, 0, bytesRead);
    } while (bytesRead != 0);

    downloadFileStream.Close();
    clientSocket.Close();
    bufferInfo.Replace("fileExists", "");
    activityLog.AppendText("File has been received now writing to the disk...");
}
It locks on the clientSocket.Receive(myReadBufferExists) call. This is because Receive, by default, will try to fill the buffer you pass it. If you endlessly call Receive, it will eventually block when there's no more data.
A couple of options:
Before you start receiving, figure out the size beforehand, either from a request header or some other side-channel information, and only attempt to Receive that much data (a sketch of this follows the list).
Have the sending software close the socket after sending the data. If this is an HTTP request, for instance, you can specify Connection: close in the request header. When the sender closes the connection, Receive returns 0 (so your bytesRead != 0 check ends the loop) and you'll know you're done.
This is hackish, but: set a receive timeout that's longer than the time it takes to receive even the slowest chunk. After you hit the end of the stream, your Receive call will time out (via exception) and you'll know you're done. I don't recommend this way.
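As a sketch of the first option, assuming the sender prefixes the file with a 4-byte length (that framing is an assumption for illustration, not part of the original code):

using System;
using System.IO;
using System.Net.Sockets;

static void DownloadFile(Socket clientSocket, string path)
{
    // Read the 4-byte length prefix.
    byte[] lengthBytes = new byte[4];
    int got = 0;
    while (got < 4)
    {
        int n = clientSocket.Receive(lengthBytes, got, 4 - got, SocketFlags.None);
        if (n == 0) throw new IOException("connection closed before length was received");
        got += n;
    }
    int fileLength = BitConverter.ToInt32(lengthBytes, 0);

    // Receive exactly fileLength bytes, writing them to disk as they arrive.
    using (FileStream fs = new FileStream(path, FileMode.Create))
    {
        byte[] buffer = new byte[8192];
        int remaining = fileLength;
        while (remaining > 0)
        {
            int n = clientSocket.Receive(buffer, 0, Math.Min(buffer.Length, remaining), SocketFlags.None);
            if (n == 0) throw new IOException("connection closed early");
            fs.Write(buffer, 0, n);
            remaining -= n;
        }
    }
}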
I am developing an app where I need to download a bunch of web pages, preferably as fast as possible. The way I do it right now is that I have multiple threads (hundreds) that each have their own System.Net.HttpWebRequest. This sort of works, but I am not getting the performance I would like. Currently I have a beefy 600+ Mb/s connection to work with, and it is utilized at most 10% (at peaks). I guess my strategy is flawed, but I am unable to find any other good way of doing this.
Also: If the use of HttpWebRequest is not a good way to download web pages, please say so :)
The code has been semi-auto-converted from java.
Thanks :)
Update:
public String getPage(String link)
{
    myURL = new System.Uri(link);
    myHttpConn = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(myURL);
    myStreamReader = new System.IO.StreamReader(
        new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
                                   System.Text.Encoding.Default).BaseStream,
        new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
                                   System.Text.Encoding.Default).CurrentEncoding);

    System.Text.StringBuilder buffer = new System.Text.StringBuilder();
    // myLineBuff is a String
    while ((myLineBuff = myStreamReader.ReadLine()) != null)
    {
        buffer.Append(myLineBuff);
    }
    return buffer.ToString();
}
One problem is that it appears you're issuing each request twice:
myStreamReader = new System.IO.StreamReader(
new System.IO.StreamReader(
myHttpConn.GetResponse().GetResponseStream(),
System.Text.Encoding.Default).BaseStream,
new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
System.Text.Encoding.Default).CurrentEncoding);
It makes two calls to GetResponse. For reasons I fail to understand, you're also creating two stream readers. You can split that up and simplify it, and also do a better job of error handling...
var response = (HttpWebResponse)myHttpConn.GetResponse();
myStreamReader = new StreamReader(response.GetResponseStream(), Encoding.Default);
That should double your effective throughput.
Also, you probably want to make sure to dispose of the objects you're using. When you're downloading a lot of pages, you can quickly run out of resources if you don't clean up after yourself. In this case, you should call response.Close(). See http://msdn.microsoft.com/en-us/library/system.net.httpwebresponse.close.aspx
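A hedged variant of getPage's body with disposal handled by using blocks (one GetResponse call, one reader); note that ReadToEnd keeps line breaks, unlike the original line-by-line append:

using (var response = (HttpWebResponse)myHttpConn.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream(), Encoding.Default))
{
    // The response and its stream are closed even if ReadToEnd throws.
    return reader.ReadToEnd();
}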
I am adding this answer as another possibility that people may encounter when:
downloading from multiple servers using multi-threaded apps
using Windows XP or Vista as the operating system
The tcpip.sys driver for these operating systems has a limit of 10 outbound connections per second. This is a rate limit, not a connection limit, so you can have hundreds of connections, but you cannot initiate more than 10/s. The limit was imposed by Microsoft to curtail the spread of certain types of virus/worm. Whether such methods are effective is outside the scope of this answer.
In a multi-threaded application that downloads from multitudes of servers, this limitation can manifest as a series of timeouts. Windows puts into a queue all of the "half-open" (newly open but not yet established) connections once the 10/s limit is reached. In my application, for example, I had 20 threads ready to process connections, but I found that sometimes I would get timeouts from servers I knew were operating and reachable.
To verify that this is happening, check the operating system's event log, under System. The error is:
EventID 4226: TCP/IP has reached the security limit imposed on the number of concurrent TCP connect attempts.
There are many references to this error and plenty of patches and fixes to apply to remove the limit. However because this problem is frequently encountered by P2P (Torrent) users, there's quite a prolific amount of malware disguised as this patch.
I have a requirement to collect data from over 1200 servers (that are actually data sensors) on 5-minute intervals. I initially developed the application (on WinXP) to reuse 20 threads repeatedly to crawl the list of servers and aggregate the data into a SQL database. Because the connections were initiated based on a timer tick event, this error happened often because at their invocation, none of the connections are established, thus 10 are immediately queued.
Note that this isn't a problem necessarily, because as connections are established, those queued are then processed. However if non-queued connections are slow to establish, that time can negatively impact the timeout limits of the queued connections (in my experience). The result, looking at my application log file, was that I would see a batch of connections that timed out, followed by a majority of connections that were successful. Opening a web browser to test "timed out" connections was confusing, because the servers were available and quick to respond.
I decided to try HEX editing the tcpip.sys file, which was suggested on a guide at speedguide.net. The checksum of my file differed from the guide (I had SP3 not SP2) and comments in the guide weren't necessarily helpful. However, I did find a patch that worked for SP3 and noticed an immediate difference after applying it.
From what I can find, Windows 7 does not have this limitation, and since moving the application to a Windows 7-based machine, the timeout problem has remained absent.
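As an application-level alternative to patching tcpip.sys (not something the original answer did), you could pace new connection attempts so the 10/s limit is never reached. A rough, hedged sketch of such a gate, called right before each connect:

using System;
using System.Diagnostics;
using System.Threading;

class ConnectRateGate
{
    private readonly object _lock = new object();
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private long _lastConnectMs = -1000;
    private const int MinIntervalMs = 110;   // a little over 1/10 s per attempt

    // Blocks just long enough to keep connection attempts under ~10 per second.
    public void WaitToConnect()
    {
        lock (_lock)
        {
            long now = _clock.ElapsedMilliseconds;
            long wait = _lastConnectMs + MinIntervalMs - now;
            if (wait > 0) Thread.Sleep((int)wait);
            _lastConnectMs = _clock.ElapsedMilliseconds;
        }
    }
}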
I do this very same thing, but with thousands of sensors that provide XML and Text content. Factors that will definitely affect performance are not limited to the speed and power of your bandwidth and computer, but the bandwidth and response time of each server you are contacting, the timeout delays, the size of each download, and the reliability of the remote internet connections.
As comments indicate, hundreds of threads is not necessarily a good idea. Currently I've found that running between 20 and 50 threads at a time seems optimal. In my technique, as each thread completes a download, it is given the next item from a queue.
I run a custom ThreaderEngine Class on a separate thread that is responsible for maintaining the queue of work items and assigning threads as needed. Essentially it is a while loop that iterates through an array of threads. As the threads finish, it grabs the next item from the queue and starts the thread again.
Each of my threads is actually downloading several separate items, but the method call is the same (.NET 4.0):
public static string FileDownload(string _ip, int _port, string _file, int Timeout, int ReadWriteTimeout, NetworkCredential _cred = null)
{
    string uri = String.Format("http://{0}:{1}/{2}", _ip, _port, _file);
    string Data = String.Empty;
    try
    {
        HttpWebRequest Request = (HttpWebRequest)WebRequest.Create(uri);
        if (_cred != null) Request.Credentials = _cred;
        Request.Timeout = Timeout;                     // applies to .GetResponse()
        Request.ReadWriteTimeout = ReadWriteTimeout;   // applies to .GetResponseStream()
        Request.Proxy = null;
        Request.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);

        using (HttpWebResponse Response = (HttpWebResponse)Request.GetResponse())
        {
            using (Stream dataStream = Response.GetResponseStream())
            {
                if (dataStream != null)
                    using (BufferedStream buffer = new BufferedStream(dataStream))
                    using (StreamReader reader = new StreamReader(buffer))
                    {
                        Data = reader.ReadToEnd();
                    }
            }
            return Data;
        }
    }
    catch (AccessViolationException ave)
    {
        // ...
    }
    catch (Exception exc)
    {
        // ...
    }
}
Using this I am able to download about 60KB each from 1200+ remote machines (72MB) in less than 5 minutes. The machine is a Core 2 Quad with 2GB RAM and utilizes four bonded T1 connections (~6Mbps).
I am connecting to my mail server using IMAP and Telnet. Once I am connected, I mark all items in the inbox as read. Sometimes the inbox will only have a couple of e-mails; sometimes it may have thousands. I am storing the response from the server in a byte array, but the byte array has a fixed length.
Private client As New TcpClient("owa.company.com", 143)
Private data As [Byte]()
Private stream As NetworkStream = client.GetStream()
.
. some code here generates a response that I want to read
.
data = New [Byte](1024) {}
bytes = stream.Read(data, 0, data.Length)
But the response from the server varies based on how many e-mails are successfully marked as read since I get one line of confirmation for each e-mail processed. There are times where the response may contain only 10-20 lines, other times it will contain thousands of lines. Is there any way for me to be able to get the response from the server in its entirety? I mean it seems like I would have to know when the server was done processing my request, but I'm not sure how to go about accomplishing this.
So to reiterate my question is: How can I check in my program to see when the server is done processing a response?
I believe you can use the NetworkStream's DataAvailable property:
if (stream.CanRead)
{
    do
    {
        bytes = stream.Read(data, 0, data.Length);
        // Append the data read to wherever you want to hold it.
        someCollectionHoldingTheFullResponse.Add(data);
    } while (stream.DataAvailable);
}
At the end, "someCollectionHoldingTheFullResponse" (memory stream? string? List<byte>? up to your requirements) would hold the full response.
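For reference, a hedged C# sketch of that loop (the question's code is VB, but the idea is the same) that also uses the count returned by Read, so stale bytes at the end of the buffer are never appended:

using System.IO;
using System.Net.Sockets;
using System.Text;

static string ReadAvailableResponse(NetworkStream stream)
{
    var response = new MemoryStream();
    byte[] data = new byte[1024];

    if (stream.CanRead)
    {
        do
        {
            int bytes = stream.Read(data, 0, data.Length);
            if (bytes == 0) break;             // connection closed by the server
            response.Write(data, 0, bytes);    // append only the bytes actually read
        } while (stream.DataAvailable);
    }
    return Encoding.ASCII.GetString(response.ToArray());
}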
Why not just check the unread mail count? If there is no unread mail, then all messages have been marked as read :)
This article has an interesting example of C# code communicating with a server over TCP. It shows how to use a while loop to wait until the server has sent all of its data over the wire.
Concentrate on the HandleClientComm() routine, since that is the code you may wish to use.