This is a piece of code for generating mail, which worked until I attached a path as a parameter. The thing is, if I attach the path it doesn't throw any error (no logs); the page just becomes unresponsive, and the debugger doesn't even jump to the next line.
Any help in understanding my mistake would be appreciated. Thanks.
public ActionResult Mailsending(string list)
{
    try
    {
        string strIdeas = string.Empty;
        string certPath = System.Configuration.ConfigurationManager.AppSettings["UploadPath"];
        List<string> pramAttachment = new List<string>();
        pramAttachment.Add(Path.Combine(Server.MapPath(certPath), "MyPdf.pdf")); // path of the generated PDF
        List<int> ideaidlist = new CommonBL().GetSubmiidListForGenerateMail();
        new CommonBL().UpdateIsGenerateStatus(ideaidlist, UserId);
        foreach (var item in ideaidlist)
        {
            strIdeas = strIdeas + item.ToString() + ",";
        }
        GenerateMyPDF(list); // here the PDF is generated
        string path = GenerateMail(strIdeas.TrimEnd(','));
        if (path != string.Empty)
        {
            new CommonBL().AddGenerateImagePath(path, UserId);
            new MailSender().SendMail((short)eMailType.GenerateMail, null, pramAttachment); // here the path is added as a parameter; after this the debugger never jumps out of this scope
        }
        return Json("Mail generated Successfully."); // no message is shown
    }
    catch (Exception ex)
    {
        return Json("Error");
    }
}
Edit :
public class MailSender : IDisposable
{
public bool SendMail(short mailId, List<KeyValuePair<string, string>> parameters, List<string> attachmentsPath = null);
}
Possibly something is still holding a lock on the generated PDF, so MailSender cannot access it due to that exclusive lock. Can you send emails with files that were generated previously?
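If that is the case, a minimal sketch of the fix is to make sure whatever wrote the PDF is disposed before the file is attached; GeneratePdfBytes below is a hypothetical stand-in for however GenerateMyPDF produces the bytes:

// Dispose the writing stream deterministically so no exclusive lock remains
// on the PDF when MailSender opens it. GeneratePdfBytes is hypothetical.
string pdfPath = Path.Combine(Server.MapPath(certPath), "MyPdf.pdf");
using (var fs = new FileStream(pdfPath, FileMode.Create, FileAccess.Write))
{
    byte[] pdfBytes = GeneratePdfBytes();
    fs.Write(pdfBytes, 0, pdfBytes.Length);
} // the handle is released here, before SendMail runs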
Adding a point which is apparently also an answer to this question:
After debugging the whole code, I found that my SMTP server was not allowing me to send mail, so even though the above code is right, the page just shows as processing.
So anyone working with the above code should be fine.
An update: it now works fine after configuring my mail service from the control panel. So if anyone wants to take a reference from this, go ahead; the code is fine.
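For anyone hitting the same symptom, one way to turn a silent hang into a visible failure is to set an explicit timeout on the SMTP client. This is only a sketch, since MailSender's internals aren't shown in the question; the host, port, and credentials below are placeholders:

// Sketch only: with a short timeout, a misconfigured or unreachable SMTP
// server fails fast with an SmtpException instead of leaving the page
// stuck on "processing". Requires System.Net and System.Net.Mail.
using (var smtp = new SmtpClient("smtp.example.com", 587)) // placeholder host/port
{
    smtp.Credentials = new NetworkCredential("user", "password"); // placeholders
    smtp.EnableSsl = true;
    smtp.Timeout = 15000; // milliseconds; the default is 100 seconds

    using (var mail = new MailMessage("from@example.com", "to@example.com",
                                      "Generated mail", "Please find the PDF attached."))
    {
        mail.Attachments.Add(new Attachment(pdfPath)); // pdfPath as built above
        smtp.Send(mail);
    }
}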
I am attempting to get the metadata from a few music files and failing miserably. Online there seems to be absolutely NO HOPE of finding an answer, no matter what I google, so I thought it would be a great time to come and ask here.
The specific error I got was: Error HRESULT E_FAIL has been returned from a call to a COM component. I really wish I could elaborate on this issue, but I'm simply getting nothing back from the COMException object. The error code was -2147467259, which in hex is 0x80004005 (E_FAIL), an error Microsoft documents only as an unspecified failure.
I'm 70% sure that it's not the file's fault. My code runs through a directory full of music and converts each file into a Song, hence the name ConvertFileToSong. What I'm trying to say is that the function would not be running if the file didn't exist.
The only thing I can really say is that I'm using .NET 6, and have a massive headache.
Well, I guess I could also share another problem I had before this error showed up. .NET 6 has top-level statements, or whatever it's called, which means I can't add the [STAThread] attribute. To solve this, I simply added the code below to the top. Not sure why I have to set the apartment state to Unknown first, but that's what I (well, someone else on Stack Overflow) had to do. That solved the earlier problem where Shell32 could not start, but could it be causing my current problem? Who knows... definitely not me.
Thread.CurrentThread.SetApartmentState(ApartmentState.Unknown);
Thread.CurrentThread.SetApartmentState(ApartmentState.STA);
Here is the code:
// Help from: https://stackoverflow.com/questions/37869388/how-to-read-extended-file-properties-file-metadata
public static Song ConvertFileToSong(FileInfo file)
{
Song song = new Song();
List<string> headers = new List<string>();
// initialise the windows shell to parse attributes from
Shell32.Shell shell = new Shell32.Shell();
Shell32.Folder objFolder = null;
try
{
objFolder = shell.NameSpace(file.FullName);
}
catch (COMException e)
{
int code = e.ErrorCode;
string hex = code.ToString("X"); // format the HRESULT as hex rather than decimal
Console.WriteLine("MESSAGE: " + e.Message + ", CODE: " + hex);
return null;
}
Shell32.FolderItem folderItem = objFolder.ParseName(file.Name);
// the rest of the code is not important, but I'll leave it there anyway
// pretty much loop infinitely with a counter; better than a while loop
// because we don't have to declare an int on a new line
for (int i = 0; i < short.MaxValue; i++)
{
string header = objFolder.GetDetailsOf(null, i);
// the header does not exist, so we must exit
if (String.IsNullOrEmpty(header)) break;
headers.Add(header);
}
// Once the code works, I'll try and get this to work
song.Title = objFolder.GetDetailsOf(folderItem, 0);
return song;
}
Good night,
Diseased Finger
Ok, so the solution isn't that hard. I used file.FullName, which includes the file's name, but Shell32.NameSpace ONLY requires the directory name (excluding the file name).
This is the code that fixed it:
public static Song ConvertFileToSong(FileInfo file)
{
// .....
Shell32.Shell shell = new Shell32.Shell();
    Shell32.Folder objFolder = shell.NameSpace(file.DirectoryName);
Shell32.FolderItem folderItem = objFolder.ParseName(file.Name);
// .....
return something;
}
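Hypothetical usage of the fixed method, just to show the shape (the folder path and file filter are assumptions):

// Walk a music folder and convert each file; ConvertFileToSong returns null
// when the shell call fails, so skip those.
var musicDir = new DirectoryInfo(@"C:\Music"); // assumed location
foreach (FileInfo file in musicDir.GetFiles("*.mp3"))
{
    Song song = ConvertFileToSong(file);
    if (song != null)
        Console.WriteLine(song.Title);
}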
I wrote a program using CSOM to upload documents to SharePoint and insert metadata into the properties. Once in a while (like every 3 months), the SharePoint server gets busy, or we reset IIS, or there is some other communication problem, and we get "The operation has timed out" on clientContext.ExecuteQuery(). To resolve the issue I wrote an extension method for ExecuteQuery that retries every 10 seconds, up to 5 times, to connect to the server and execute the query.
My code works in the Dev and QA environments without any problem, but in Prod, when it fails the first time with the timeout error, the second attempt only uploads the document and doesn't update the properties; all the properties end up empty in the library. ExecuteQuery() doesn't return any error, but of the two requests in the batch, which are uploading the file and updating the properties, it seems only the upload is done, and I don't know what happens to the properties. It kind of removes them from the batch on the second attempt!
I used both upload methods, docs.RootFolder.Files.Add and File.SaveBinaryDirect, in different parts of my code, but I'll copy just one of them here so you can see what I have.
I appreciate your help.
public static void ExecuteSharePointQuery(ClientContext context)
{
    int cnt = 0;
    while (cnt < 5)
    {
        try
        {
            context.ExecuteQuery();
            return; // success
        }
        catch (Exception ex)
        {
            cnt++;
            Logger.Error(string.Format("Communication attempt with SharePoint failed. Attempt {0}", cnt));
            Logger.Error(ex.Message);
            if (cnt == 5)
            {
                Logger.Error("Couldn't execute the query in SharePoint.");
                throw;
            }
            Thread.Sleep(10000); // wait 10 seconds before the next attempt
        }
    }
}
public static void UploadSPFileWithProperties(string siteURL, string listTitle, FieldMapper item)
{
Logger.Info(string.Format("Uploading to SharePoint: {0}", item.pdfPath));
using (ClientContext clientContext = new ClientContext(siteURL))
{
using (FileStream fs = new FileStream(item.pdfPath, FileMode.Open))
{
try
{
FileCreationInformation fileCreationInformation = new FileCreationInformation();
fileCreationInformation.ContentStream = fs;
fileCreationInformation.Url = Path.GetFileName(item.pdfPath);
fileCreationInformation.Overwrite = true;
List docs = clientContext.Web.Lists.GetByTitle(listTitle);
Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(fileCreationInformation);
uploadFile.CheckOut();
//Update the metadata
ListItem listItem = uploadFile.ListItemAllFields;
//Set field values on item
foreach (List<string> list in item.fieldMappings)
{
if (list[FieldMapper.SP_VALUE_INDEX] != null)
{
TrySet(ref listItem, list[FieldMapper.SP_FIELD_NAME_INDEX], (FieldType)Enum.Parse(typeof(FieldType), list[FieldMapper.SP_TYPE_INDEX]), list[FieldMapper.SP_VALUE_INDEX]);
}
}
listItem.Update();
uploadFile.CheckIn(string.Empty, CheckinType.OverwriteCheckIn);
SharePointUtilities.ExecuteSharePointQuery(clientContext);
}
catch (Exception ex)
{
// Swallowing the exception here hides upload failures; log it at minimum.
Logger.Error(ex.Message);
}
}
}
}
There are too many possible reasons for me to really suggest a solution, especially considering it only happens in the prod environment.
What I can say is that it's probably easiest to keep a reference to the last uploaded file. If your code fails, then check whether that last file was uploaded correctly.
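A minimal sketch of that idea, assuming CSOM and that you know the server-relative URL of the file you just tried to upload (the helper name is hypothetical):

// After the retries give up, check whether the file actually landed in the
// library, so you know whether only the metadata step is missing.
private static bool WasFileUploaded(ClientContext ctx, string serverRelativeUrl)
{
    Microsoft.SharePoint.Client.File file = ctx.Web.GetFileByServerRelativeUrl(serverRelativeUrl);
    ctx.Load(file, f => f.Exists);
    ctx.ExecuteQuery();
    return file.Exists;
}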
Side note: I'm not sure if this is relevant, but if it's a large file, you want to upload it in slices.
I have an app that reads from text files to determine which reports should be generated. It works as it should most of the time, but once in a while, the program deletes one of the text files it reads from/writes to. Then an exception is thrown ("Could not find file") and progress ceases.
Here is some pertinent code.
First, reading from the file:
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
. . .
private static List<String> ReadFileContents(string fileName)
{
List<String> fileContents = new List<string>();
try
{
fileContents = File.ReadAllLines(fileName).ToList();
}
catch (Exception ex)
{
RoboReporterConstsAndUtils.HandleException(ex);
}
return fileContents;
}
Then, writing to the file -- it marks the record/line in that file as having been processed, so that the same report is not re-generated the next time the file is examined:
MarkAsProcessed(DelPerfFile, qrRecord);
. . .
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
try
{
var fileContents = File.ReadAllLines(fileToUpdate).ToList();
for (int i = 0; i < fileContents.Count; i++)
{
if (fileContents[i] == qrRecord)
{
fileContents[i] = string.Format("{0}{1} {2}",
qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
}
}
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
}
catch (Exception ex)
{
RoboReporterConstsAndUtils.HandleException(ex);
}
}
So I do delete the file, but immediately replace it:
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
The files being read have contents such as this:
Opas,20170110,20161127,20161231-COMPLETED 1/10/2017 12:33:27 AM
Opas,20170209,20170101,20170128-COMPLETED 2/9/2017 11:26:04 AM
Opas,20170309,20170129,20170225-COMPLETED
Opas,20170409,20170226,20170401
If "-COMPLETED" appears at the end of the record/row/line, it is ignored - will not be processed.
Also, if the second element (at index 1) is a date in the future, it will not be processed (yet).
So, for the examples shown above, the first three have already been done and will subsequently be ignored. The fourth one will not be acted on until on or after April 9th, 2017 (at which time the data within the date range of the last two dates will be retrieved).
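As a sketch of those rules in code (a hypothetical helper, not the actual ConvertCRVRecordToQueuedReport), assuming the yyyyMMdd date format shown above:

using System.Globalization;

// Hypothetical check mirroring the rules above: skip rows marked -COMPLETED,
// and skip rows whose generate date (the second field) is still in the future.
private static bool ShouldProcess(string record)
{
    if (record.Contains("-COMPLETED")) return false; // already processed
    string[] fields = record.Split(',');
    DateTime dateToGenerate = DateTime.ParseExact(fields[1], "yyyyMMdd", CultureInfo.InvariantCulture);
    return dateToGenerate <= DateTime.Today;
}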
Why is the file sometimes deleted? What can I do to prevent it from ever happening?
If helpful, in more context, the logic is like so:
internal static string GenerateAndSaveDelPerfReports()
{
string allUnitsProcessed = String.Empty;
bool success = false;
try
{
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
List<QueuedReports> qrList = new List<QueuedReports>();
foreach (string qrRecord in delPerfRecords)
{
var qr = ConvertCRVRecordToQueuedReport(qrRecord);
// Rows that have already been processed return null
if (null == qr) continue;
// If the report has not yet been run, and it is due, add it to the list
if (qr.DateToGenerate <= DateTime.Today)
{
var unit = qr.Unit;
qrList.Add(qr);
MarkAsProcessed(DelPerfFile, qrRecord);
if (String.IsNullOrWhiteSpace(allUnitsProcessed))
{
allUnitsProcessed = unit;
}
else if (!allUnitsProcessed.Contains(unit))
{
allUnitsProcessed = allUnitsProcessed + " and " + unit;
}
}
}
foreach (QueuedReports qrs in qrList)
{
GenerateAndSaveDelPerfReport(qrs);
success = true;
}
}
catch
{
success = false;
}
if (success)
{
return String.Format("Delivery Performance report[s] generated for {0} by RoboReporter2017", allUnitsProcessed);
}
return String.Empty;
}
How can I ironclad this code to prevent the files from being periodically trashed?
UPDATE
I can't really test this, because the problem occurs so infrequently, but I wonder whether adding a "pause" between the File.Delete() and File.WriteAllLines() calls would solve the problem.
UPDATE 2
I'm not absolutely sure what the answer to my question is, so I won't add this as an answer, but my guess is that the File.Delete() and File.WriteAllLines() calls were occurring too close together, and so the delete was sometimes taking out both the old and the new copy of the file.
If so, a pause between the two calls might have solved the problem 99.42% of the time, but from what I found here, the File.Delete() seems redundant/superfluous anyway, so I tested with the File.Delete() commented out, and it worked fine; I'm just doing without that occasionally problematic call now. I expect that to solve the issue.
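If you want to ironclad it further, another option is to write the new contents to a temporary file and swap it in with File.Replace, so there is never a moment when the file doesn't exist. A minimal sketch (the helper name is mine):

// Write-then-swap: the new content is written first, then atomically replaces
// the original, keeping a .bak copy of the old file. A crash between the two
// steps can no longer leave the file missing.
private static void RewriteFileAtomically(string path, IEnumerable<string> lines)
{
    string tempPath = path + ".tmp";
    File.WriteAllLines(tempPath, lines);
    File.Replace(tempPath, path, path + ".bak");
}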
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
I would simply add an extra parameter to WriteAllLines() (which could default to false) to tell the function to open the file in overwrite mode, and not call File.Delete() at all then.
Do you currently check the return value of the file open?
Update: ok, it looks like WriteAllLines() is a .NET Framework function and therefore cannot be changed, so I deleted this answer. However, it now shows up in the comments as a proposed solution on another forum:
"just use something like File.WriteAllText where if the file exists,
the data is just overwritten, if the file does not exist it will be
created."
And this was exactly what I meant (while thinking WriteAllLines() was a user-defined function), because I've had similar problems in the past.
So a solution like that can solve some tricky problems (instead of deleting and quickly reopening, just overwrite the file); it's also less work for the OS, and possibly less file/disk fragmentation.
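For what it's worth, File.WriteAllLines behaves the same way: it creates the file if it's missing and overwrites it if it exists, so the delete-then-write pair reduces to a single call.

// No File.Delete needed; WriteAllLines creates or overwrites on its own.
File.WriteAllLines(fileToUpdate, fileContents);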
I'm trying to commit changes to the project to my SVN server, but it doesn't seem to work.
The Test:
I run the code below against the folder when nothing has been changed, and nothing happens, as expected. Then I create a new folder and run the code again; this time nothing seems to happen either: the code runs and returns without showing the error, as if it worked.
public bool svnCommitProject(String project, String message)
{
using (SvnClient client = new SvnClient())
{
try
{
SvnCommitArgs args = new SvnCommitArgs();
args.LogMessage = message;
client.Authentication.ForceCredentials(Properties.Settings.Default.Username, Properties.Settings.Default.Password);
return client.Commit(Properties.Settings.Default.LocalFolderPath + Properties.Settings.Default.Username + @"\" + project, args);
}
catch
{
MessageBox.Show("ERROR");
return false;
}
}
}
Suspected Problem:
From looking at this and Google, I suspect the problem exists because the new files haven't been "added" to SVN control, but I'm not sure.
Is this the case? And if so, how would I go about adding the files which need to be added? I also assume that something similar would be needed for files which are deleted/modified; is this correct, and how would I add this in too?
See Find files not added to subversion
Yes, files just dropped into the local working directory are not picked up automatically; you have to tell SVN about them before they can be committed.
Collection<SvnStatusEventArgs> filesStatuses;
if (!client.GetStatus(localDir, new SvnStatusArgs
{
    Depth = SvnDepth.Infinity,
    RetrieveRemoteStatus = true,
    RetrieveAllEntries = true
}, out filesStatuses))
{
    throw new SvnOperationCanceledException("SvnClient.GetStatus doesn't return anything.");
}
// Schedule unversioned files for addition and missing files for deletion before committing.
filesStatuses.Where(i => i.LocalContentStatus == SvnStatus.NotVersioned).ToList().ForEach(i => client.Add(i.Path));
filesStatuses.Where(i => i.LocalContentStatus == SvnStatus.Missing).ToList().ForEach(i => client.Delete(i.Path));
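Putting that together with the commit method from the question, a sketch might look like this (same settings as before; the status-then-commit flow is the part that matters):

public bool svnCommitProject(String project, String message)
{
    string localDir = Properties.Settings.Default.LocalFolderPath + Properties.Settings.Default.Username + @"\" + project;
    using (SvnClient client = new SvnClient())
    {
        try
        {
            client.Authentication.ForceCredentials(Properties.Settings.Default.Username, Properties.Settings.Default.Password);
            // Schedule adds/deletes first so the commit actually has changes to send.
            Collection<SvnStatusEventArgs> filesStatuses;
            if (client.GetStatus(localDir, new SvnStatusArgs { Depth = SvnDepth.Infinity }, out filesStatuses))
            {
                foreach (SvnStatusEventArgs status in filesStatuses)
                {
                    if (status.LocalContentStatus == SvnStatus.NotVersioned)
                        client.Add(status.Path);
                    else if (status.LocalContentStatus == SvnStatus.Missing)
                        client.Delete(status.Path);
                }
            }
            SvnCommitArgs args = new SvnCommitArgs();
            args.LogMessage = message;
            return client.Commit(localDir, args);
        }
        catch
        {
            MessageBox.Show("ERROR");
            return false;
        }
    }
}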
I have been working on an IMAP client to get emails from Gmail. My application worked fine until about an hour ago, when attachments stopped being retrieved.
The connection and messaging is being handled by imapX.
Connection is OKAY
Login is OKAY
Getting folders is OKAY
Getting messages is OKAY
At this point attachments.Count == 0. It was working earlier this afternoon, so I wonder if I have been over-testing and Google has blacklisted my computer for a while? Does anyone know if that happens? I have been running it perhaps once every 5-10 minutes, maybe more at times, so this seems a plausible issue.
I have attempted to send a new email with a totally new file, and it still does not see the attachment (but it does (always) see the messages themselves).
Can anyone shed some light on this issue?
EDIT : The header includes the following tag: {[X-MS-Has-Attach, yes]}
EDIT (code) :
private void PollMailFolders(object state)
{
try
{
if(_imapClient == null || !_imapClient.IsConnected)
_imapClient = new ImapClient(_config.Server, _config.Port, true);
if (_imapClient.Connection())
{
if(!_imapClient.IsLogined)
_imapClient.LogIn(_config.Username, _config.Password);
string dateSearch = string.Format(
"SINCE {0:d-MMM-yyyy}{1}", DateTime.Today.AddDays(-_config.HistoryOnStartupDays),
_isFirstTime ? "" : " UNSEEN");
_isFirstTime = false;
foreach (Folder folder in _imapClient.Folders["SSForecasts"].SubFolder)
{
var messages = _imapClient.Folders[folder.Name].Search(dateSearch, false);
foreach (Message m in messages)
{
m.Process();
foreach (var a in m.Attachments)
{
SendDataToParser(_encoding.GetString(a.FileData), folder.Name);
}
m.SetFlag(ImapFlags.SEEN);
}
}
}
}
catch(Exception e)
{
_diagnostics.Logger.ErrorFormat("Error in PollMailFolders: {0}", e);
}
}
I have produced a workaround which allows me to get the attachment data. It's not the solution I had in mind, though it does work.
It's a simple filename-extension check followed by a conversion of the message data.
BTW: _encoding = Encoding.GetEncoding(1252);
if (bodyPart.ContentFilename.EndsWith(".csv"))
{
return _encoding.GetString(Convert.FromBase64String(bodyPart.ContentStream));
}
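For context, here is roughly where that check sits relative to the polling loop above. This is only a sketch: the BodyParts collection name is my assumption about the imapX message object, while ContentFilename and ContentStream come from the snippet above.

// Hypothetical placement inside PollMailFolders: scan the raw body parts
// instead of m.Attachments, and decode any part that looks like a CSV file.
foreach (var bodyPart in m.BodyParts)
{
    if (bodyPart.ContentFilename != null && bodyPart.ContentFilename.EndsWith(".csv"))
    {
        string csv = _encoding.GetString(Convert.FromBase64String(bodyPart.ContentStream));
        SendDataToParser(csv, folder.Name);
    }
}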