JSON parsing trouble - C#

WebClient client = new WebClient();
Stream stream = client.OpenRead(" some link ");
StreamReader reader = new StreamReader(stream);
Newtonsoft.Json.Linq.JObject jObject = Newtonsoft.Json.Linq.JObject.Parse(reader.ReadLine());
List<string> list = new List<string>();
// loading list
for (int i = 0; i < ((string)jObject["some_stream"][i]["some_channel"]["some_name"]).Count(); i++)
{
    string result = (string)jObject["some_streams"][i]["some_channel"]["some_name"];
    list.Insert(i, result);
}
stream.Close();
This code works, but the JSON data contains 20+ results that should be returned; I only get 8.
What could be the cause?

You are counting the length of a string, and at some point that length is less than or equal to i. That is, this piece of code
((string)jObject["some_stream"][i]["some_channel"]["some_name"]).Count()
returns the length of the string at location i, so if you manage to iterate 8 times, then the string at jObject["some_stream"][8]["some_channel"]["some_name"] has a length of 8 or less, at which point the loop ends.
From the usage it looks like jObject["some_stream"] returns an array; in that case, what you could do is something like this:
var arr = (TReal[])jObject["some_stream"];
var list = (from obj in arr
            select (string)obj["some_channel"]["some_name"]).ToList();
You will need to substitute TReal with the actual element type of jObject["some_stream"].
Aside: whenever you open a stream, it's a good idea to do so within a using statement; in your code the stream would not be closed in the case of an exception.
The code would then be:
WebClient client = new WebClient();
using (var stream = client.OpenRead(" some link "))
{
    var reader = new StreamReader(stream);
    var jObject = Newtonsoft.Json.Linq.JObject.Parse(reader.ReadLine());
    var arr = (TReal[])jObject["some_stream"];
    var list = (from obj in arr
                select (string)obj["some_channel"]["some_name"]).ToList();
}
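For what it's worth, here is a minimal self-contained sketch of the array approach using Json.NET's JArray type. The JSON shape and property names here are assumptions reconstructed from the question, not the actual feed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

class Demo
{
    static void Main()
    {
        // Hypothetical JSON mirroring the structure implied by the question.
        var json = @"{ ""some_stream"": [
            { ""some_channel"": { ""some_name"": ""alpha"" } },
            { ""some_channel"": { ""some_name"": ""beta""  } }
        ] }";

        JObject jObject = JObject.Parse(json);

        // Cast the token to JArray and project out the names.
        // Iterating the array itself avoids the string-length loop bound.
        var arr = (JArray)jObject["some_stream"];
        List<string> list = arr
            .Select(obj => (string)obj["some_channel"]["some_name"])
            .ToList();

        Console.WriteLine(string.Join(",", list)); // alpha,beta
    }
}
```

Because the loop bound is now the array's element count, all results come back regardless of how long any individual name string happens to be.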

AS3 ByteArray to C# ByteArray

I'm looking to convert this piece of code to C#:
var local1:ByteArray= new ByteArray();
var auth:String = root.loaderInfo.parameters.auth as String;
var key0:String = root.loaderInfo.parameters.key0 as String;
var key1:String = root.loaderInfo.parameters.key1 as String;
var key2:String = root.loaderInfo.parameters.key2 as String;
var key3:String = root.loaderInfo.parameters.key3 as String;
local1.writeUnsignedInt(parse(auth));
local1.writeUnsignedInt(parse(key0));
local1.writeUnsignedInt(parse(key1));
local1.writeUnsignedInt(parse(key2));
local1.writeUnsignedInt(parse(key3));
trace(local1);
You see how I directly print the byte array without converting it to a string. How can you do that in C#? It's supposed to print out something like this: TV˜ 3 R j i
If the array contains something that can be interpreted as character codes, then you can decode the bytes into text. For example:
string localText = Encoding.Default.GetString(local1);
The encoding to use would depend on how the text was converted to bytes in the first place.
You can use a MemoryStream and a BinaryWriter to put the integers in an array. Example:
string auth = "1"; // example data, would come from your object
string key0 = "2";
byte[] local1;
using (MemoryStream m = new MemoryStream())
{
    using (BinaryWriter w = new BinaryWriter(m))
    {
        w.Write(Int32.Parse(auth));
        w.Write(Int32.Parse(key0));
    }
    local1 = m.ToArray();
}
foreach (var g in local1)
{
    Console.WriteLine((char)g);
}
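One caveat worth flagging: AS3's ByteArray defaults to big-endian byte order, while BinaryWriter always writes little-endian, so the two snippets produce the bytes in a different order. If whatever consumes these bytes expects the AS3 layout, a sketch like the following (using BinaryPrimitives, available in newer .NET; the values here are placeholders) writes the integers big-endian:

```csharp
using System;
using System.Buffers.Binary;

class BigEndianDemo
{
    static void Main()
    {
        uint auth = 1; // example values, parsed from your strings
        uint key0 = 2;

        // Lay out each 4-byte value big-endian, matching ByteArray's default.
        byte[] local1 = new byte[8];
        BinaryPrimitives.WriteUInt32BigEndian(local1.AsSpan(0, 4), auth);
        BinaryPrimitives.WriteUInt32BigEndian(local1.AsSpan(4, 4), key0);

        Console.WriteLine(BitConverter.ToString(local1)); // 00-00-00-01-00-00-00-02
    }
}
```

If you are stuck on older .NET, reversing each 4-byte group with Array.Reverse before writing achieves the same effect.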

Multi-level sorting on strings

Here is a sample of the raw data I have:
Sana Paden,1098,64228,46285,2/15/2011
Ardelle Mahr,1242,85663,33218,3/25/2011
Joel Fountain,1335,10951,50866,5/2/2011
Ashely Vierra,1349,5379,87475,6/9/2011
Amado Loiacono,1406,62789,38490,7/17/2011
Joycelyn Dolezal,1653,14720,13638,8/24/2011
Alyse Braunstein,1657,69455,52871,10/1/2011
Cheri Ravenscroft,1734,55431,58460,11/8/2011
I used a FileStream with a nested StreamReader, first to determine how many lines are in the file, and second to create an array of longs giving the start of every line in the file. Code and output follow:
using (FileStream fs = new FileStream(@"C:\SourceDatatoedit.csv", FileMode.Open, FileAccess.Read))
{
    fs.Seek(offset, SeekOrigin.Begin);
    StreamReader sr = new StreamReader(fs);
    while (!sr.EndOfStream && fs.CanRead)
    {
        streamsample = sr.ReadLine();
        numoflines++;
    } // end while block

    long[] dataArray = new long[numoflines];
    fs.Seek(offset, SeekOrigin.Begin);
    StreamReader dr = new StreamReader(fs);
    numoflines = 0;
    streamsample = "";
    while (!dr.EndOfStream && fs.CanRead)
    {
        streamsample = dr.ReadLine();
        //pointers.Add(numoflines.ToString());
        dataArray[numoflines] = offset;
        offset += streamsample.Length - 1;
        numoflines++;
    } // end while
One string contains a name, an ID, a loan amount, a payment amount, and the payment date.
I have a method in place that returns the remaining amount by subtracting the payment amount from the loan amount and then dividing by 100 to get a dollars-and-cents value.
After doing this I want to order my information by date, then name, and lastly with negative amounts first. I understand I could create a Loan class, build a list of Loan objects, and run a LINQ-to-Objects query against the set to obtain this, but I'm trying to do it without LINQ. Any suggestions?
Depending on the context of your code, you can gain many benefits by introducing a custom class / business object. It will help you maintain a good separation of concerns and move toward more manageable, testable code. You can implement the IComparable<T> interface so that you can invoke a custom Sort on a List<T>.
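To sketch that suggestion (the class and field names here are assumptions based on the row layout above, not code from the question): a Loan type can implement IComparable<Loan> so that List<T>.Sort orders by date, then name, then negative amounts first, with no LINQ involved.

```csharp
using System;
using System.Collections.Generic;

class Loan : IComparable<Loan>
{
    public string Name;
    public DateTime PaymentDate;
    public decimal Remaining; // (loan - payment) / 100, may be negative

    public int CompareTo(Loan other)
    {
        // Sort keys: 1) date, 2) name, 3) negative remaining amounts first.
        int byDate = PaymentDate.CompareTo(other.PaymentDate);
        if (byDate != 0) return byDate;

        int byName = string.Compare(Name, other.Name, StringComparison.Ordinal);
        if (byName != 0) return byName;

        // false sorts before true, so negative amounts come first.
        return (Remaining >= 0).CompareTo(other.Remaining >= 0);
    }
}

class SortDemo
{
    static void Main()
    {
        var loans = new List<Loan>
        {
            new Loan { Name = "Sana Paden",  PaymentDate = new DateTime(2011, 2, 15), Remaining = 179.43m },
            new Loan { Name = "Ardelle Mahr", PaymentDate = new DateTime(2011, 2, 15), Remaining = -52.45m },
        };
        loans.Sort(); // uses the CompareTo above
        Console.WriteLine(loans[0].Name); // Ardelle Mahr
    }
}
```

List<T>.Sort calls CompareTo internally, so this is a plain comparison-based sort rather than a LINQ query.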
I know you mentioned not wanting to use LINQ. However, you could replace these lines of code:
using (FileStream fs = new FileStream(@"C:\SourceDatatoedit.csv", FileMode.Open, FileAccess.Read))
{
    fs.Seek(offset, SeekOrigin.Begin);
    StreamReader sr = new StreamReader(fs);
    while (!sr.EndOfStream && fs.CanRead)
    {
        streamsample = sr.ReadLine();
        numoflines++;
    } // end while block
}
with a single statement like this:
int numoflines = File.ReadLines("SourceDatatoedit.csv")
    .Select(line => line.Split(',')).ToList().Count;
Or you could even just get the list directly:
var lines = File.ReadLines("SourceDatatoedit.csv")
    .Select(line => line.Split(',')).ToList();
And get the number of lines afterward
numoflines = lines.Count;
And then continue with your code that you have like:
long[] dataArray = new long[numoflines];
fs.Seek(offset, SeekOrigin.Begin);
StreamReader dr = new StreamReader(fs);
numoflines = 0;
streamsample = "";
while (!dr.EndOfStream && fs.CanRead)
{
    streamsample = dr.ReadLine();
    //pointers.Add(numoflines.ToString());
    dataArray[numoflines] = offset;
    offset += streamsample.Length - 1;
    numoflines++;
} // end while
Or just use the List obtained above and work with it, e.g. by creating an IComparable implementation as @sfuqua suggested above.

deciphering the message that was sent over LAN

I'm using TcpClient to send a message (a List of arrays) over LAN, so I have separated:
array elements with the combination string arr_sep = "[s{p(a)c}e]";
list elements with the combination string list_sep = "[n|e|w)".
How can I decipher the following string into a List<string[]> using regex?
string tessst = "abra[s{p(a)c}e]kada[s{p(a)c}e]bra[n|e|w)hel[s{p(a)c}e]oww[s{p(a)c}e]een";
Here is what I tried to do:
string tessst = "abra[s{p(a)c}e]kada[s{p(a)c}e]bra[n|e|w)hel[s{p(a)c}e]oww[s{p(a)c}e]een";
List<string[]> splited2 = new List<string[]>();
if (tessst.Length > 0)
{
    List<string> splited1 = new List<string>(Regex.Split(tessst, "[^a-zA-Z]+")); // [s{p(a)c}e]
    for (int i = 0; i < splited1.Count; i++)
    {
        splited2.Add(Regex.Split(splited1[i], "[^a-zA-Z]+")); // [n|e|w)
    }
}
//splited2 is the result!
Unfortunately, my regex is completely broken. How do I fix it? Is there a better approach, maybe?
Expected result:
List<string[]> result = new List<string[]>();
result.Add(new string[]{"abra", "kada", "bra"});
result.Add(new string[]{"hel", "oww", "een"});
EDIT: fix
When I receive the data I normally limit the buffer to 1024 bytes, but that's not enough to get all 50 entries of the List<string[]>!
I increased the buffer to 10,000 bytes and now all the info goes through the LAN. It takes 3,499 bytes to serialize 50 string[] entries of my List<string[]>. In the future I will be using up to 900 entries in my list, so it is safe to assume that I will need roughly (3499 / 50) * 900 ≈ 63,000 bytes to serialize my data!
The question is: is it safe/secure to send that much data at once? Here is the code that I use to receive:
string message = "";
int thisRead = 0;
int max = 10000; // from 1024 to 10000
byte[] dataByte = new byte[max];
using (var strm = new MemoryStream())
{
    thisRead = Nw.Read(dataByte, 0, max);
    strm.Write(dataByte, 0, thisRead);
    strm.Seek(0, SeekOrigin.Begin);
    using (StreamReader reader = new StreamReader(strm))
    {
        message = reader.ReadToEnd();
    }
}
List<string[]> result = new JavaScriptSerializer().Deserialize<List<string[]>>(message);
And this is the code to send:
List<string[]> list = browser_ex.GetMusicListSer(); // 50 list elements
string text_message = new JavaScriptSerializer().Serialize(list);
MemoryStream Fs = new MemoryStream(ASCIIEncoding.Default.GetBytes(text_message));
byte[] buffer = Fs.ToArray();
Nw.Write(buffer, 0, buffer.Length); // 3499 bytes
Can I increase the maximum amount of bytes to 100 thousand and forget about this problem once and for all? There should be another solution... I believe.
Instead of reinventing the wheel, use serialization.
You have many alternatives for this (JavaScriptSerializer, DataContractSerializer, DataContractJsonSerializer, BinaryFormatter, SoapFormatter, XmlSerializer).
List<string[]> list = new List<string[]>();
list.Add(new string[] { "abra", "kada", "bra" });
list.Add(new string[] { "hel", "oww", "een" });
string stringToSend = new JavaScriptSerializer().Serialize(list);
//Send
string receivedString = stringToSend;
List<string[]> result = new JavaScriptSerializer()
.Deserialize<List<string[]>>(receivedString);
EDIT
Assuming Nw is a NetworkStream, your code can be as simple as this:
// Receiver
StreamReader reader = new StreamReader(Nw);
while (true)
{
    List<string[]> result = new JavaScriptSerializer()
        .Deserialize<List<string[]>>(reader.ReadLine());
    // do some work with "result"
}

// Sender
StreamWriter writer = new StreamWriter(Nw);
while (true)
{
    // form your "list" and send
    writer.WriteLine(new JavaScriptSerializer().Serialize(list));
    writer.Flush();
}
string tessst = "abra[s{p(a)c}e]kada[s{p(a)c}e]bra[n|e|w)hel[s{p(a)c}e]oww[s{p(a)c}e]een";
List<string[]> splited2 = new List<string[]>();
if (tessst.Length > 0)
{
    List<string> splited1 = new List<string>(Regex.Split(tessst, @"\[n\|e\|w\)"));
    for (int i = 0; i < splited1.Count; i++)
    {
        splited2.Add(Regex.Split(splited1[i], @"\[s\{p\(a\)c\}e\]"));
    }
}
This will give you the desired output you described.
If you're sure that you don't want to use any existing protocol or technology, you can easily invent your own protocol using the BinaryWriter and BinaryReader classes:
Sender:
using (var writer = new BinaryWriter(networkStream))
{
    foreach (var array in list)
    {
        writer.Write(array.Length);
        for (int i = 0; i < array.Length; i++)
        {
            writer.Write(array[i]);
        }
    }
    writer.Write(0); // a zero length marks the end of the message
    writer.Flush();
}
Receiver:
using (var reader = new BinaryReader(networkStream))
{
    var list = new List<string[]>();
    int length;
    while ((length = reader.ReadInt32()) != 0)
    {
        var array = new string[length];
        for (int i = 0; i < array.Length; i++)
        {
            array[i] = reader.ReadString();
        }
        list.Add(array);
    }
}
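On the buffer-size worry in the original question: no matter how large the buffer is, a single Nw.Read call is not guaranteed to return the whole message, because TCP delivers a byte stream, not discrete messages. A common fix is to length-prefix each payload and loop until all announced bytes have arrived. Here is a sketch of that idea (it assumes both ends are .NET, so BitConverter's native byte order matches on both sides; the demo uses a MemoryStream in place of the network stream):

```csharp
using System;
using System.IO;
using System.Text;

static class Framing
{
    // Sender: write a 4-byte length prefix, then the UTF-8 payload.
    public static void Send(Stream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiver: read exactly the announced number of bytes,
    // looping because Read may return fewer bytes than requested.
    public static string Receive(Stream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(prefix, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int read = 0;
        while (read < count)
        {
            int n = stream.Read(buffer, read, count - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
        return buffer;
    }
}

class FramingDemo
{
    static void Main()
    {
        var pipe = new MemoryStream(); // stand-in for the NetworkStream
        Framing.Send(pipe, "hello");
        pipe.Position = 0;
        Console.WriteLine(Framing.Receive(pipe)); // hello
    }
}
```

With this framing, the payload can be any size; the receiver never depends on a magic buffer constant like 1024 or 10,000.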

Read textfile from specific position till specific length

Due to receiving a very bad data file, I have to come up with code that reads a non-delimited text file from a specific starting position for a specific length, to build up a workable dataset. The text file is not delimited in any way, but I do have the starting and ending position of each string that I need to read. I've come up with this code, but I'm getting an error and can't figure out why; if I replace the 395 with a 0, it works.
e.g. Invoice number starting position = 395, ending position = 414, length = 20
using (StreamReader sr = new StreamReader(@"\\t.txt"))
{
    char[] c = null;
    while (sr.Peek() >= 0)
    {
        c = new char[20]; // Invoice number string
        sr.Read(c, 395, c.Length); // THIS IS GIVING ME AN ERROR
        Debug.WriteLine("" + c[0] + c[1] + c[2] + c[3] + c[4]..c[20]);
    }
}
Here is the error that I get:
System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection. at System.IO.StreamReader.Read(Char[] b
Please Note
Seek() is too low level for what the OP wants. See this answer instead for line-by-line parsing.
Also, as Jordan mentioned, Seek() has the issue of character encodings and varying character sizes (e.g. for non-ASCII and non-ANSI files, like UTF, which is probably not applicable to this question). Thanks for pointing that out.
Original Answer
Seek() is only available on a stream, so try using sr.BaseStream.Seek(..), or use a different stream, like this:
using (Stream s = new FileStream(path, FileMode.Open))
{
    s.Seek(offset, SeekOrigin.Begin);
    byte[] buffer = new byte[length];
    s.Read(buffer, 0, length);
}
Here is my suggestion for you:
using (StreamReader sr = new StreamReader(@"\\t.txt"))
{
    char[] c = new char[20]; // Invoice number string
    sr.BaseStream.Position = 395;
    sr.Read(c, 0, c.Length);
}
(new answer based on comments)
You are parsing invoice data, with each entry on a new line, and the required data is at a fixed offset for every line. Stream.Seek() is too low level for what you want to do, because you will need several seeks, one for every line. Rather use the following:
int offset = 395;
int length = 20;
using (StreamReader sr = new StreamReader(@"\\t.txt"))
{
    while (!sr.EndOfStream)
    {
        string line = sr.ReadLine();
        string myData = line.Substring(offset, length);
    }
}
Solved this ages ago; just wanted to post the solution that was suggested:
using (StreamReader sr = new StreamReader(path2))
{
    // Add the columns once, before the read loop, so they are not re-added for every line.
    dsnonhb.Tables[0].Columns.Add("No");
    dsnonhb.Tables[0].Columns.Add("InvoiceNum");
    dsnonhb.Tables[0].Columns.Add("Odo");
    dsnonhb.Tables[0].Columns.Add("PumpVal");
    dsnonhb.Tables[0].Columns.Add("Quantity");

    string line;
    while ((line = sr.ReadLine()) != null)
    {
        DataRow myrow = dsnonhb.Tables[0].NewRow();
        myrow["No"] = rowcounter.ToString();
        myrow["InvoiceNum"] = line.Substring(741, 6);
        myrow["Odo"] = line.Substring(499, 6);
        myrow["PumpVal"] = line.Substring(609, 7);
        myrow["Quantity"] = line.Substring(660, 6);
        dsnonhb.Tables[0].Rows.Add(myrow);
        rowcounter++;
    }
}
I've created a class called AdvancedStreamReader in my Helpers project on GitHub here:
https://github.com/jsmunroe/Helpers/blob/master/Helpers/IO/AdvancedStreamReader.cs
It is fairly robust. It is a subclass of StreamReader and keeps all of that functionality intact. There are a few caveats: a) it resets the position of the stream when it is constructed; b) you should not seek the BaseStream while you are using the reader; c) you need to specify the newline character type if it differs from the environment and the file can only use one type. Here are some unit tests to demonstrate how it is used.
[TestMethod]
public void ReadLineWithNewLineOnly()
{
    // Setup
    var text = "ƒun ‼Æ¢ with åò☺ encoding!\nƒun ‼Æ¢ with åò☺ encoding!\nƒun ‼Æ¢ with åò☺ encoding!\nHa!";
    var bytes = Encoding.UTF8.GetBytes(text);
    var stream = new MemoryStream(bytes);
    var reader = new AdvancedStreamReader(stream, NewLineType.Nl);
    reader.ReadLine();

    // Execute
    var result = reader.ReadLine();

    // Assert
    Assert.AreEqual("ƒun ‼Æ¢ with åò☺ encoding!", result);
    Assert.AreEqual(54, reader.CharacterPosition);
}

[TestMethod]
public void SeekCharacterWithUtf8()
{
    // Setup
    var text = $"ƒun ‼Æ¢ with åò☺ encoding!{NL}ƒun ‼Æ¢ with åò☺ encoding!{NL}ƒun ‼Æ¢ with åò☺ encoding!{NL}Ha!";
    var bytes = Encoding.UTF8.GetBytes(text);
    var stream = new MemoryStream(bytes);
    var reader = new AdvancedStreamReader(stream);

    // Pre-condition assert
    Assert.IsTrue(bytes.Length > text.Length); // More bytes than characters in the sample text.

    // Execute
    reader.SeekCharacter(84);

    // Assert
    Assert.AreEqual(84, reader.CharacterPosition);
    Assert.AreEqual("Ha!", reader.ReadToEnd());
}
I wrote this for my own use, but I hope it will help other people.
395 is the index in the c array at which you start writing. There is no index 395 there; the max is 19.
I would suggest something like this.
StreamReader r;
...
string allFile = r.ReadToEnd();
int offset = 395;
int length = 20;
And then use
allFile.Substring(offset, length)

Parsing UTF8 encoded data from a Web Service

I'm parsing the data from http://toutankharton.com/ws/localisations.php?l=75
As you can see, the text is badly encoded (<name>Paris 2ème</name>).
My code is the following:
using (var reader = new StreamReader(stream, Encoding.UTF8))
{
    var contents = reader.ReadToEnd();
    XElement cities = XElement.Parse(contents);
    var t = from city in cities.Descendants("city")
            select new City
            {
                Name = city.Element("name").Value,
                Insee = city.Element("ci").Value,
                Code = city.Element("code").Value,
            };
}
Isn't new StreamReader(stream, Encoding.UTF8) sufficient?
That looks like what happens when you take UTF-8 bytes and output them with an incompatible encoding like ISO8859-1. Do you know what the real character is? Going back, using ISO8859-1 to get a byte array and UTF-8 to read it, gives "è".
var input = "è";
var bytes = Encoding.GetEncoding("ISO8859-1").GetBytes(input);
var realString = Encoding.UTF8.GetString(bytes);
