I am processing approximately 19,710 directories containing IIS log files in an Azure Synapse Spark notebook. There are 3 IIS log files in each directory. The notebook reads the 3 files in a directory and converts them from text-delimited format to Parquet, with no partitioning. But occasionally I get the following two errors for no apparent reason.
{
"errorCode": "2011",
"message": "An error occurred while sending the request.",
"failureType": "UserError",
"target": "Call Convert IIS To Raw Data Parquet",
"details": []
}
When I get the error above, all of the data was successfully written to the appropriate folder in Azure Data Lake Storage Gen2.
Sometimes I get:
{
"errorCode": "6002",
"message": "(3,17): error CS0234: The type or namespace name 'Spark' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)\n(4,17): error CS0234: The type or namespace name 'Spark' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)\n(12,13): error CS0103: The name 'spark' does not exist in the current context",
"failureType": "UserError",
"target": "Call Convert IIS To Raw Data Parquet",
"details": []
}
When I get the error above, none of the data was successfully written to the appropriate folder in Azure Data Lake Storage Gen2.
In both cases you can see that the notebook did run for a period of time.
I have enabled one retry on the Spark notebook. It is a PySpark notebook that uses Python for the parameters, with the remainder of the logic written in C# via %%csharp. The Spark pool is small (4 cores/32 GB) with 5 nodes.
The only conversion going on in the notebook is converting a string column to a timestamp:
var dfConverted = dfparquetTemp.WithColumn("Timestamp",Col("Timestamp").Cast("timestamp"));
When I say this is random: the pipeline is currently running, and after processing 215 directories there have been 2 of the first failure and one of the second.
Any ideas or suggestions would be appreciated.
OK, after running for 113 hours (it's almost done) I am still getting the following errors, but it looks like all of the data was written out.
Count: 1
{
"errorCode": "6002",
"message": "(3,17): error CS0234: The type or namespace name 'Spark' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)\n(4,17): error CS0234: The type or namespace name 'Spark' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)\n(12,13): error CS0103: The name 'spark' does not exist in the current context",
"failureType": "UserError",
"target": "Call Convert IIS To Raw Data Parquet",
"details": []
}
Count: 1
{
"errorCode": "6002",
"message": "Exception: Failed to create Livy session for executing notebook. LivySessionId: 4419, Notebook: Convert IIS to Raw Data Parquet.\n--> LivyHttpRequestFailure: Something went wrong while processing your request. Please try again later. HTTP status code: 500. Trace ID: e0860852-40e6-498f-b2df-4eff9fee504a.",
"failureType": "UserError",
"target": "Call Convert IIS To Raw Data Parquet",
"details": []
}
Count: 17
{
"errorCode": "2011",
"message": "An error occurred while sending the request.",
"failureType": "UserError",
"target": "Call Convert IIS To Raw Data Parquet",
"details": []
}
I am not sure what these errors are about, and of course I will rerun the specific data in the pipeline to see if this is a one-off or keeps occurring on this specific data. But it seems as if these errors are occurring after the data has been written to Parquet format.
Well, I think this is part of the issue. Keep in mind that I am writing the main part of the logic in C#, so your mileage in another language may vary. Also, these are IIS log files that are space-delimited, and they can be multiple megabytes in size; one file could be 30 MB.
My new code has been running for 17 hours without a single error. All of the changes I made were to ensure that I disposed of resources that would consume memory. Examples follow:
When reading a text-delimited file as a binary file:
// Read the whole file as a single binary row.
var df = spark.Read().Format("binaryFile").Option("inferSchema", false).Load(sourceFile);
byte[] rawData = df.First().GetAs<byte[]>("content");
The data in the byte[] eventually gets loaded into a List<GenericRow>, but I never set the rawData variable to null.
After filling the byte[] from the data frame above, I added:
df.Unpersist();
After putting all of the data from the byte[] into List<GenericRow> rows and adding it to a data frame using the code below, I cleared out the rows variable:
var dfparquetTemp = spark.CreateDataFrame(rows, inputSchema);
rows.Clear();
Then, after changing a column type and writing out the data, I did an unpersist on the data frame:
var dfConverted = dfparquetTemp.WithColumn("Timestamp", Col("Timestamp").Cast("timestamp"));
if (overwrite) {
    dfConverted.Write().Mode(SaveMode.Overwrite).Parquet(targetFile);
}
else {
    dfConverted.Write().Mode(SaveMode.Append).Parquet(targetFile);
}
dfConverted.Unpersist();
Finally, I have most of my logic inside a C# method that gets called in a foreach loop, in the hope that the CLR will dispose of anything else I missed.
And last but not least, a lesson learned. When reading a directory containing multiple Parquet files, Spark seems to read all of the files into the data frame. When reading a directory containing multiple text-delimited files that you are treating as binary files, Spark reads only ONE of the files into the data frame.
So in order to process multiple text-delimited files out of a folder, I had to pass in the names of the individual files and process the first file with SaveMode.Overwrite and the remaining files with SaveMode.Append, as in the sketch below. Every attempt at using any kind of wildcard, or specifying only the directory name, resulted in reading just one file into the data frame. (Trust me here: after hours of GoogleFu I tried every method I could find.)
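Putting the pieces together, here is a minimal sketch of the overall pattern. The method name ConvertFileToParquet, the sourceFiles list, and inputSchema are illustrative assumptions rather than my exact production code, and the parsing of the raw bytes into rows is elided:
// Minimal sketch of the per-file flow described above. spark is the
// SparkSession and inputSchema is the StructType for the IIS columns.
void ConvertFileToParquet(string sourceFile, string targetFile, bool overwrite)
{
    // Read the single delimited file as one binary blob.
    var df = spark.Read().Format("binaryFile").Option("inferSchema", false).Load(sourceFile);
    byte[] rawData = df.First().GetAs<byte[]>("content");
    df.Unpersist();

    var rows = new List<GenericRow>();
    // ... split rawData into lines and fields, appending one GenericRow per log line ...

    var dfparquetTemp = spark.CreateDataFrame(rows, inputSchema);
    rows.Clear();

    var dfConverted = dfparquetTemp.WithColumn("Timestamp", Col("Timestamp").Cast("timestamp"));
    dfConverted.Write().Mode(overwrite ? SaveMode.Overwrite : SaveMode.Append).Parquet(targetFile);
    dfConverted.Unpersist();
}

// The first file overwrites the target folder; the rest append.
bool first = true;
foreach (var sourceFile in sourceFiles)
{
    ConvertFileToParquet(sourceFile, targetFile, first);
    first = false;
}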
Again, 17 hours into processing there has not been a single error, so one important lesson seems to be to keep your memory usage as low as possible.
OK, I am adding another answer rather than editing the existing ones. After 113 hours I had 52 errors that I had to reprocess. I found that some of the errors were due to: Kryo serialization failed: Buffer overflow. Available: 0, required: 19938070. To avoid this, increase spark.kryoserializer.buffer.max. Well, after a few hours of GoogleFu, which also included increasing the size of my Spark pool from small to medium (on its own that had no effect), I added this as the first cell in my notebook:
%%configure
{
    "conf":
    {
        "spark.kryoserializer.buffer.max": "512"
    }
}
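(If I read the Spark docs correctly, a unitless value for spark.kryoserializer.buffer.max is interpreted as MiB, so "512" here means 512 MiB; "512m" would be the more explicit equivalent.)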
So this fixed the Kryo serialization issue, and I believe the larger Spark pool has fixed all of the remaining errors, because they are now all processing successfully. Also, jobs that previously failed after taking 2 hours to run are now completing in 30 minutes. I suspect this speed increase is due to the larger Spark pool's memory. So, lesson learned: do not use the small pool for IIS files.
Finally, something that bugged me: when you type %%configure into an empty cell, Microsoft so unhelpfully inserts the following crap:
%%configure
{
# You can get a list of valid parameters to config the session from https://github.com/cloudera/livy#request-body.
"driverMemory": "28g", # Recommended values: ["28g", "56g", "112g", "224g", "400g", "472g"]
"driverCores": 4, # Recommended values: [4, 8, 16, 32, 64, 80]
"executorMemory": "28g",
"executorCores": 4,
"jars": ["abfs[s]: //<file_system>#<account_name>.dfs.core.windows.net/<path>/myjar.jar", "wasb[s]: //<containername>#<accountname>.blob.core.windows.net/<path>/myjar1.jar"],
"conf":
{
# Example of standard spark property, to find more available properties please visit: https://spark.apache.org/docs/latest/configuration.html#application-properties.
"spark.driver.maxResultSize": "10g",
# Example of customized property, you can specify count of lines that Spark SQL returns by configuring "livy.rsc.sql.num-rows".
"livy.rsc.sql.num-rows": "3000"
}
}
I call it crap because IT HAS COMMENTS IN IT. If you try to just add in the one setting you want, it will fail because of the comments. JUST BE WARNED.
Working LabVIEW Code
Attached above is LabVIEW code that I have successfully used in the past to read frequency data from a device. I also usually use the Start Task VI between my property node and while loop.
I am trying to code this in C#. So far I have successfully coded analog outputs and analog inputs on my device, a USB-6363, so I know I am able to write and read data from the device successfully with C#.
I have also used multimeters (Grainger link at the bottom of this post) to read frequency data (the orange Hz mode the device is set to in the picture).
However, my C# code seems to be having issues reading the frequency data; it is attached below. When I try running this program I get the following error, which is the same error I get when using the example program called 'MeasDigFreqBuffCont_ExtClk_ArmStart.2013'. The code I show just creates the task; I call it later in a different section of my program, and that is where the error occurs.
------------------------------------------------- Begin Error Code -------------------------------------------------
{Error=-200077 Message="Requested value is not a supported value for this property. The property value may be invalid because it conflicts with another property.\n\nProperty: NationalInstruments.DAQmx.CIChannel.FrequencyDivisor\nRequested Value: 1\nPossible Values: 4 to 4294967295\nChannel Name: Digital Frequency\n\nTask Name: _unnamedTask<0>\n\nStatus Code: -200077"}
------------------------------------------------- End Error Code --------------------------------------------------
In the example program it asks for a sample clock source (a PFI channel on the device). However, the LabVIEW code does not ask for this. Is this example perhaps doing more than what I am trying to do?
Task frequencyInput = new Task();
frequencyInput.CIChannels.CreateFrequencyChannel(
    "Dev1/ctr0",                                    // counter to use
    "Digital Frequency",                            // channel name
    200,                                            // minimum expected frequency (Hz)
    15000,                                          // maximum expected frequency (Hz)
    CIFrequencyStartingEdge.Rising,                 // edge to start measuring on
    CIFrequencyMeasurementMethod.DynamicAveraging,  // measurement method
    0.001,                                          // measurement time (s)
    1,                                              // divisor
    CIFrequencyUnits.Hertz                          // output units
);
frequencyInput.CIChannels["Digital Frequency"].FrequencyTerminal = "/Dev1/PFI0";
CounterSingleChannelReader counterFreq = new CounterSingleChannelReader(frequencyInput.Stream);
double counterFreqData = counterFreq.ReadSingleSampleDouble();
txtPFI0.Text = Convert.ToString(counterFreqData);
FLUKE (R) Fluke-115 Compact - Basic Features Digital Multimeter, 14° to 122°F Temp. Range
Formatting the error message:
Requested value is not a supported value for this property. The property value may be invalid because it conflicts with another property.
Property: NationalInstruments.DAQmx.CIChannel.FrequencyDivisor
Requested Value: 1
Possible Values: 4 to 4294967295
Channel Name: Digital Frequency
Task Name: _unnamedTask<0>
Status Code: -200077
According to the documentation, you are asking the device to use an invalid divisor. Change your 1 to a 4:
frequencyInput.CIChannels.CreateFrequencyChannel(
"Dev1/ctr0",
"Digital Frequency",
200,
15000,
CIFrequencyStartingEdge.Rising,
CIFrequencyMeasurementMethod.DynamicAveraging,
0.001,
/* here */ 4,
CIFrequencyUnits.Hertz
);
NI installs C# examples for DAQmx, and it includes one for measuring frequency:
C:\Users\Public\Documents\National Instruments\NI-DAQ\Examples\DotNET4.0\Counter\Measure Digital Frequency\MeasDigFrequency_LowFreq1Ctr\CS
I am trying to use the PDBSTR.EXE tool to merge version information into a PDB file and from time to time I encounter the following error:
[result: error 0x3 opening K:\dev\main\bin\mypdbfile.pdb] <- can be a different PDB file.
An example of the command line that I use is:
pdbstr.exe -w -s:srcsrv -p:K:\dev\main\bin\mypdbfile.pdb -i:C:\Users\username\AppData\Local\Temp\tmp517B.stream
Could you tell me what would cause error code 0x3?
If the error code corresponds to the standard system error code 3, ERROR_PATH_NOT_FOUND, then it seems to think that the path K:\dev\main\bin\mypdbfile.pdb does NOT exist when in fact it DOES.
However please note that my K: drive is a SUBST'ed drive.
(System error code reference https://msdn.microsoft.com/en-ca/library/windows/desktop/ms681382(v=vs.85).aspx)
Do you know what the 0x3 error code could possibly mean?
If this error code only appears from time to time, then I suspect ERROR_PATH_NOT_FOUND might indeed be the real problem.
My guess at the cause: I don't see any double quotes wrapping the paths you pass in. When a path contains a folder name with spaces in it, the unquoted path breaks the command line. For example:
pdbstr.exe -w -s:srcsrv -p:K:\dev\main\my folder with spaces\mypdbfile.pdb -i:C:\Users\username\AppData\Local\Temp\tmp517B.stream
Adding double quotes around the paths might solve it. Hope it helps.
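For example, with quotes (same switches as in your original command; the folder name with spaces is hypothetical):
pdbstr.exe -w -s:srcsrv -p:"K:\dev\main\my folder with spaces\mypdbfile.pdb" -i:"C:\Users\username\AppData\Local\Temp\tmp517B.stream"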
I have a C# program that is using QBFCv13 to create 46 customers in QuickBooks Pro 2014.
When the program runs, I get an exception with the message "String too long.". I am guessing it's probably caused by one of the customer names being too long, so I tested the program by creating 2 customers, one of them with a long name. This time I didn't get an exception; I got a response list with one response containing an error code and the other response without an error.
I am confused: why do I get an exception in certain cases? The message doesn't say anything more than "String too long". I am wondering if there is something else I can do to figure out what is causing this "String too long" error.
Thanks.
Try enabling verbose logging and see if it tells you what the error is.
https://intuitpartnerplatform.lc.intuit.com/questions/177198-troubleshooting-sdk-issues
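While you dig through the logs, one defensive option is to validate string lengths yourself before they reach QBFC, so an over-long value is reported by your own code instead of surfacing as a bare "String too long." exception. A minimal sketch; the 41-character cap on the customer Name field and the SafeName helper are my assumptions (check the OSR for the actual field limits in your SDK version):
// Hypothetical pre-check: trim names to an assumed SDK limit before
// building the request, logging anything that had to be shortened.
const int MaxNameLength = 41; // assumed limit; verify in the OSR

static string SafeName(string name)
{
    if (name.Length > MaxNameLength)
    {
        Console.WriteLine($"Customer name too long, truncating: {name}");
        return name.Substring(0, MaxNameLength);
    }
    return name;
}

// When building the request, something like:
// ICustomerAdd customerAdd = msgSetRequest.AppendCustomerAddRq();
// customerAdd.Name.SetValue(SafeName(rawCustomerName));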
I am using the Stanford POS tagger for a project, and it works successfully when it reads the tagger file from my computer (the project's folder).
But I need to upload the tagger file first and read the tagger file from a URL.
To do so, I have uploaded the tagger file and I am trying to read it by giving the URL to the MaxentTagger constructor. My code is in C#, and I have overridden the MaxentTagger class, so its constructor looks like this:
public Tagger()
{
    // Read the model file into memory and hand it to the base class.
    java.io.ByteArrayInputStream inputStream = new java.io.ByteArrayInputStream(System.IO.File.ReadAllBytes(@"C:\models\english-left3words-distsim.tagger"));
    base.readModelAndInit(null, new java.io.DataInputStream(inputStream), false);
}
However I get this error when I run my code:
"An unhandled exception of type 'java.lang.RuntimeException' occurred in stanford-postagger.dll
Additional information: java.io.FileNotFoundException: Could not find a part of the path 'C:\u\nlp\data\pos_tags_are_useless\egw4-reut.512.clusters'."
Does anybody know why this happens and how I can resolve this? I appreciate any sort of help very much!
This error comes from the program trying to load a file which gives the distributional similarity mapping from words to clusters. It's trying to get it from the location that is specified in the training properties file (and you naturally don't have a file at that location). This happened because you don't have a properly initialized TaggerConfig object at the time readModelAndInit() is called. The way it gets initialized is unintuitive (was badly architected), but you're only encountering this because you're trying to use a non-public API.
Why can't you just use the public API as follows?
MaxentTagger tagger = new MaxentTagger("http://my.url.com/models/english-left3words-distsim.tagger");
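Then, for example (assuming the IKVM-built stanford-postagger.dll exposes the Java API's tagString method):
// Tag a sentence; each word comes back with its tag appended, e.g. word_NN.
string tagged = tagger.tagString("The quick brown fox jumps over the lazy dog.");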
Mode: http://www.emacswiki.org/emacs/CSharpMode
Log:
Loading /.emacs.d/contrib/dev/csharp-mode.el
Done loading /.emacs.d/contrib/dev/csharp-mode.el
File mode specification error: (void-function make-local-hook)
Loading vc-git...done
When done with a buffer, type C-x #
(No files need saving)
File mode specification error: (void-function make-local-hook)
When done with a buffer, type C-x #
Making completion list... [2 times]
goto-history-element: End of history; no default available [3 times]
or: Symbol's function definition is void: make-local-hook
mouse-minibuffer-check: Minibuffer window is not active
(No files need saving)
When done with a buffer, type C-x #
(No files need saving)
File mode specification error: (void-function make-local-hook)
When done with a buffer, type C-x #
Making completion list... [2 times]
or: Symbol's function definition is void: make-local-hook
Why is that, and how can I fix it?
make-local-hook has been obsolete for years, and was removed entirely in Emacs 24.
You should try to locate an updated version of the library. According to the Wiki page you linked to, the latest version is here:
http://code.google.com/p/csharpmode/
Failing that, there's a pretty good chance that the code only includes those function calls to retain backwards-compatibility with Emacs 20, and that provided there is an appropriate call to add-hook present, all you would need to do is delete all instances of (make-local-hook HOOK) from the code.
Here are the relevant bits of its old docstring:
(make-local-hook HOOK)
This function is obsolete since 21.1;
not necessary any more.
Make the hook HOOK local to the current buffer.
The return value is HOOK.
You never need to call this function now that `add-hook' does it for you
if its LOCAL argument is non-nil.
See also C-hf add-hook RET
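Alternatively, if you would rather not edit the library, a small compatibility shim in your init file should have the same effect. A sketch, assuming csharp-mode only calls make-local-hook alongside add-hook calls that already pass a non-nil LOCAL argument:
;; Define a no-op make-local-hook if this Emacs no longer has it (24+).
(unless (fboundp 'make-local-hook)
  (defun make-local-hook (hook)
    "Compatibility no-op; `add-hook' with a non-nil LOCAL does this now."
    hook))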