EA records processed - what does it mean?
What is EA records processed?
EA Records are Extended Attribute records. They're a feature of NTFS that allows for a file to have custom extra metadata stored along with it (metadata that is not interpretable to the file system). They don't indicate any kind of problem with your file system or operating system.
Hello, in this Lightboard session we are going to talk about processing records, or looping over records.
Now, there are a number of options available to you. First, consider: am I just transforming a collection at the structure level? Am I modifying and mapping the structure? In that case, a DataWeave transform is ideal for this type of use. So this is your first option, and DataWeave has a map function that you can use to do exactly that.
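DataWeave syntax itself isn't shown in this session, but the shape of a map-style transform can be sketched in plain Python; the record fields below are invented for illustration, and the point is the same as DataWeave's map: produce a brand-new structure from the input without writing loop bookkeeping by hand.

```python
# Input records (hypothetical order data, for illustration only).
orders = [{"id": 1, "qty": 2}, {"id": 2, "qty": 5}]

# A map-style transform: build a NEW structure from the old one,
# analogous in spirit to DataWeave's `payload map (o) -> { ... }`.
summaries = [{"orderId": o["id"], "bulk": o["qty"] > 3} for o in orders]

print(summaries)
```

Note that the input collection is left untouched; the transform creates something new, which is exactly the behavior described for DataWeave later in this session.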
So you shouldn't need anything more complex for that case. Now, if you need to run flow code or event processors on each record, that is the time to start considering scopes. The easiest way to get started with your looping needs is the For Each scope.
The For Each scope lets you put pieces of flow code, event processors, inside it, and it will loop over each record. Remember that For Each also has a batch-size mode, so you can tell it to process records in chunks: two records, or ten records, at a time.
So there's the ability to do bulk operations here if you need mini-collections out of your total collection. For Each is a single-threaded thing; if you want to throw multiple threads at the problem, you need to think about other solutions, such as the Batch module, where a work queue manages the records. A batch job has a number of steps that each record goes through, and then there is an On Complete phase that gives you a summary of how everything went. Batch carries more overhead, unlike For Each, which simply processes sequentially, and it has the option of giving you a summary at the end.
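The For Each behavior just described, sequential processing with an optional chunk size, can be sketched in plain Python. The function below is my own illustration of the idea, not Mule code:

```python
def for_each_in_chunks(records, chunk_size, process):
    """Sequentially hand `process` slices of `records`, `chunk_size` at a time.

    Mimics a For Each scope with a batch size: single-threaded, in order,
    with a final partial chunk if the collection doesn't divide evenly.
    """
    for i in range(0, len(records), chunk_size):
        process(records[i:i + chunk_size])

batches = []
for_each_in_chunks(list(range(7)), 3, batches.append)
print(batches)  # -> [[0, 1, 2], [3, 4, 5], [6]]
```

Because everything runs on one thread in order, the trade-off described above applies: simple and predictable, but no parallelism.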
Well, the last option is to use queuing, since queuing is also a way of balancing the load. You can put messages into some kind of queue; the best known are VM queues, but we could switch to JMS, and there are other messaging providers that offer a similar capability. VM queues are a built-in feature; JMS might be the choice when the consumer, or the producer, sits outside the Mule ecosystem. So queuing records is another way to process them: queue things up and then have one or more consumers read them off. If all you need to do is create a new structure, consider DataWeave.
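The queuing pattern described above can be sketched with Python's standard library standing in for a VM or JMS queue. This is a minimal illustration of the publish/consume idea, not Mule configuration; the doubling "work" is a made-up stand-in for real record processing.

```python
import queue
import threading

q = queue.Queue()          # stands in for a VM/JMS queue
results = []
lock = threading.Lock()

def consumer():
    """Read records off the queue until a None sentinel arrives."""
    while True:
        record = q.get()
        if record is None:
            q.task_done()
            return
        with lock:
            results.append(record * 2)   # "process" the record
        q.task_done()

workers = [threading.Thread(target=consumer) for _ in range(2)]
for w in workers:
    w.start()

for record in range(5):    # the producer publishes records
    q.put(record)
for _ in workers:          # one sentinel per consumer to shut down cleanly
    q.put(None)
for w in workers:
    w.join()

print(sorted(results))     # -> [0, 2, 4, 6, 8]
```

The key property this shows is the one the session calls out: publishing and consuming are decoupled, so the work is naturally asynchronous and you can add consumers to balance load.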
If you have to loop sequentially with a single thread, use For Each. If you want to throw multiple threads at the problem and need to walk each record through a series of steps, use Batch, which, like For Each, can do bulk operations, because you can use a Batch Aggregator. And the last option is to use queues.
Remember that if you look at the result, For Each leaves the payload exactly as it was; it doesn't change the structure, unlike the DataWeave transform, which does. The same is true for Batch: whatever was in the payload at the beginning, the structure is not changed. Okay, so to summarize the options: if you want to transform the structure, use a DataWeave transform; it takes something and creates something new using the map operator or map function.
With For Each, you get back the same payload you received; the processing that happened inside does not overwrite what you had along the way. If you are using Batch, the output is a batch job result, which gives you summary information about the number of records processed and the like.
And what about queues? Well, it depends on how you use the queue, but generally you publish into it and the processing happens asynchronously, so the answer depends on what you do next. Those are your four options for processing many records.
What is a reparse record?
Reparse Points are a feature of NTFS that provide a mechanism for file system filter drivers to intercept a file access request and potentially rewrite it. They provide the mechanism that powers several other NTFS features, including volume mount points and directory junctions.
When troubleshooting hard drive errors, you will likely be removing hard drives from a computer and replacing them. So you should use a good set of screwdrivers with both cross-point (Phillips) and Torx bits. These should not be magnetized, so that the screwdriver can't affect anything inside your computer.
And you never want to open a drive all the way up so you can see the platters spinning; you want to keep everything inside that drive intact. Another nice troubleshooting tool is an external hard drive enclosure. This allows you to take a drive out of a computer, put it in an enclosure like this one, and access it through a USB connection.
That way, you can at least recover some of the files even if you can no longer boot from the drive. In the Windows operating system, you may want to use the Check Disk command to analyze the file system: running CHKDSK /f will scan the file system and look for any logical problems, and if it finds any, it will repair them automatically.
There is also a CHKDSK /r, which goes through the entire drive and reads every sector to identify potentially bad sectors. When it finds problems, it recovers any information it can and marks those particular sectors as unreadable. When you run CHKDSK /r, it also performs the /f checks.
So not only are you looking for logical problems, you are also checking for physical problems. If the volume is in use by another process, you must run Check Disk during the startup process. For example, if you run CHKDSK /f and it cannot run because the volume is in use by another process, it will ask whether you want to schedule this volume to be checked the next time the system restarts. You can say yes; it confirms that the volume will be checked on the next reboot and takes you back to a command prompt. When you reboot, your system says a disk check has been scheduled.
To skip the check, press any key within nine seconds. If you let the check proceed, you will see the actual disk test run: files are checked, then the index check runs, and finally it tells you whether there are problems with any part of the file system.
At this point there is a summary of exactly what was done. If there were problems, it tells you what they were, identifies any bad sectors, and gives you an overview of exactly what was found and corrected during the scan. If you are configuring a drive with separate partitions, you must first put a file system on each partition so that information can be read from and written to it.
And the Windows format command allows you to initialize that particular partition with a file system. If you're doing this with a partition that already contains data, the format command will remove everything on that drive, so be very careful when performing this particular function. The format command uses the drive letter assigned to the partition during the partition creation process. If we run format k: we are performing a standard format of the partition assigned drive letter K. If you accidentally format a partition or delete some files that you didn't want to delete, you may want to use file recovery software like this one.
This is Recuva. It allows you to get into your file system and drive, and find files that may have been deleted but not yet overwritten. This is great if you accidentally format or if a virus infects your computer and starts deleting files from your hard drive.
If you've erased a volume or there might be bad sectors on a drive, these file recovery programs are very good at preserving as much data as possible, even if some of it is damaged. You can learn more about Recuva, which is a totally free program and works exceptionally well, at piriform.com. Since Windows stores your documents on a hard drive, it breaks each file into small pieces and stores those pieces wherever there might be free space on the drive.
It is very common for a single file to be in many, many different parts spread across a hard drive. Whenever you need to get that file, your hard drive has to go to all of these different locations to finally put that file back together. And that takes time.
To fix some of these performance issues, you can have your Windows operating system gather all of the pieces of a file and write them to your drive in one continuous piece. This will obviously improve your read and write times, since you now have one place to access that single file. This fragmented-file problem is only a performance problem with rotating hard drives.
If you use an SSD in your computer, you will have instant access to all of these file fragments. And there is no performance penalty when you save these files in different fragments. You can run the defragmentation through the properties of a hard disk.
You can find the Defragment Now option in the local disk properties. Once you are at the command line, you can also start defragmenting by running the defrag command. You can schedule this weekly, and many Windows versions do so by default, so you always have the best possible access to all of your files.
What does reparse mean?
There is a secret list of filenames that Microsoft does NOT want to use.
So much so that you literally can't create files with those names in Windows. In reality, things aren't as scary as I made them sound, but it is true, and the reason why is actually interesting. And even if you've heard of these forbidden filenames, stay tuned, because I'm going to go over a lot more similar things that you probably didn't know.
Now, you may have actually seen these forbidden filenames. The ones usually mentioned are AUX, CON, PRN, NUL, plus LPT0 through LPT9 and COM0 through COM9. For the latter two, you can still use plain COM or LPT, or even COM10; it only fails with a single digit right after. And these are prohibited no matter what file extension you add, or with no file extension at all. Of course, I will go into what these names are for and why they are forbidden later. Here is something you may not know even if you've heard about this: LPT and COM are prohibited not only with ordinary digits after them, but also with certain Unicode digits, for example the superscript ¹, ² and ³; with one of those appended, you cannot create the file either, and instead of the previous error it says 'This item could not be found'.
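The classic reserved set can be checked programmatically. This Python sketch is my own helper, not a Windows API; it mirrors the basic rule that the name is blocked regardless of case or extension. It covers only the traditional COM1-COM9 / LPT1-LPT9 set, and does not model the COM0/LPT0 cases or the Unicode superscript quirk described above.

```python
# Classic reserved DOS device names: CON, PRN, AUX, NUL, COM1-9, LPT1-9.
RESERVED = {"CON", "PRN", "AUX", "NUL"} | {
    f"{dev}{n}" for dev in ("COM", "LPT") for n in range(1, 10)
}

def is_reserved(filename: str) -> bool:
    """True if the name collides with a reserved device name.

    Only the part before the first dot matters: 'con.txt' is just
    as forbidden as 'CON', and case is ignored.
    """
    stem = filename.split(".")[0]
    return stem.upper() in RESERVED

print(is_reserved("con.txt"), is_reserved("COM10.log"))  # -> True False
```

Note how COM10 passes, matching the observation above that only a single digit directly after COM or LPT triggers the restriction.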
What are reparse points in NTFS?
A redirection capability in the Windows NTFS file system. Containing up to 16KB of data and a tag indicating their purpose, reparse points are somewhat similar to Windows shortcuts and Unix symbolic links. A reparse point may be used to point to a file that has been temporarily relocated on a different drive.
Richard Davis from the 13Cubed channel is a great example of what I really love about YouTube.
He is a professional in forensics and is kind enough to share his experience and knowledge in various videos on digital forensics and incident response. This is an industry and topic I know absolutely nothing about, which is why I was very excited when he said yes after I asked him if he would like to make a guest video for LiveOverflow. In this particular video, he shows how to use two different tools to collect timeline data from a system and then analyze it with a timeline analysis tool.
I also learned about prefetch files on Windows; I didn't know that was a thing. They are metadata files used to make applications start up faster, and apparently they are very useful for forensics.
Anyway, I hope you enjoy this video, and check out the other videos on his channel as well. Hello, and welcome back to a special episode of 13Cubed. In this episode we're going to look at creating a timeline, like we did when we introduced Plaso in the Introduction to Windows Forensics series, only with some major differences. In that earlier episode we created a "super timeline", but did you know that we can also include timestamped artifacts from memory? Timelines can incorporate many memory artifacts that carry temporal information, including processes, network connections, and even registry keys and event logs that can be extracted from memory. In this episode we create a complete picture of all file system and memory related events from a Windows 10 virtual machine, incorporating both traditional disk-based forensics and memory forensics. So we'll cross the streams and combine content from the Introduction to Windows Forensics and Introduction to Memory Forensics series.

First, we use the Sleuth Kit's fls utility to create a file system timeline (body) file. Then we use Volatility, along with the Timeliner plugin, to create a timeline file from the memory image. Next, we concatenate both sets of data and use the Sleuth Kit's mactime parser to generate CSV output for this data. Lastly, we look at the results with Eric Zimmerman's Timeline Explorer. So let's start.

The first thing we're going to do is run fls with no options so we can see what's available. As you can see, there are a number of options, and three of them are very often used together: -r to recurse through directory entries, -d to show only deleted entries, and -p to show the full path for each file. But for our use case, there is a slightly different set of options we want to use.
We use -r for recursion, and then we use -m to display the output in input format for mactime. After -m we specify whatever text should serve as the mount point identifier; in our case that is "C:", but we could literally specify any string we want. So we run fls -r -m C: followed by the path to the image file on the external drive I connected, and redirect the output to fls.body. Now, this is going to take a few minutes, so we'll come back when it finishes, take a look at the results, and move on to the next step. Okay, now fls is complete and we have an fls.body file containing our output. So let's take a look at the first 50 lines of this body file. As you can see, we see what look like NTFS attributes, and we see pipe-separated data that appears to be valid. Now let's move to our next step: we'll clear the screen and go ahead and run Volatility.
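The body file that fls -m emits is pipe-separated with eleven fields per line (MD5, name, inode, mode, UID, GID, size, then atime/mtime/ctime/crtime as epoch seconds). A quick way to sanity-check lines like the ones being inspected here is a small parser; this Python sketch, and its sample line, are my own illustration rather than output from the actual image in the episode.

```python
# Field order of the Sleuth Kit 3.x body format produced by `fls -m`.
BODY_FIELDS = ["md5", "name", "inode", "mode", "uid", "gid", "size",
               "atime", "mtime", "ctime", "crtime"]

def parse_body_line(line: str) -> dict:
    """Split one pipe-separated body-format line into named fields."""
    values = line.rstrip("\n").split("|")
    if len(values) != len(BODY_FIELDS):
        raise ValueError(f"expected {len(BODY_FIELDS)} fields, got {len(values)}")
    return dict(zip(BODY_FIELDS, values))

# A made-up sample line in the same shape `fls -r -m "C:"` would produce.
sample = ("0|C:/Windows/notepad.exe|9467-128-1|r/rrwxrwxrwx|0|0|193536"
          "|1549459200|1549459200|1549459200|1549459200")
record = parse_body_line(sample)
print(record["name"], record["size"])
```

A parser like this is handy for spotting truncated or malformed lines before feeding hundreds of thousands of them to mactime.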
We specify the image file, which is back on the external drive. I happen to already know that this is a Windows 10 image, and I know the build number; if I didn't, I could of course run imageinfo, or the more powerful kdbgscan, to make sure I'm using the correct profile, which is very important with Volatility. Otherwise, we will get unpredictable results. So we specify the Win10x64 profile matching that build, then the Timeliner plugin, the output format, and of course the output file, which is simply placed in the current directory and called timeliner.body. Now, as with fls,
this will take a few minutes. We'll be back when it's done and pick up where we left off. So now we've run the Timeliner plugin against the memory dump, which comes from the same Windows 10 virtual machine. You'll notice a warning message that is safe to ignore, but it appears that Timeliner completed successfully, and we now have our resulting timeliner.body file alongside the fls.body file from fls.
So, as before, let's go ahead and take a look at some of the output from the timeliner body file to make sure it looks valid before we proceed. Looking at the first 50 lines, again we see data separated by pipes, and we see PIDs and PPIDs along with various process information.
So it looks like the output is correct. Let's now go ahead and append timeliner.body to fls.body using the shell's append redirection (cat timeliner.body >> fls.body). fls.body is now about 577,000 lines long, roughly 22,000 of which come from the timeliner.body file.
Now we have all of our data in one place. Our next step is generating CSV output from this body file, and to do that we run mactime. As you can see from its available options, we'll use -z to indicate the time zone the data was collected in, and -d to request comma-separated output.
We also use -b to say that we don't want standard input; we want to use a body file as input. So we run mactime -z UTC -d -b with our fls.body file and redirect the output into timeline.csv, and that file will be the one we look at in the last section with Timeline Explorer. So let's run that and come back when it's done. As you can see, just as an example, I opened the result in LibreOffice Calc: it looks like the timeline CSV file was generated successfully, and if I keep scrolling you will see that there is a tremendous amount of output.
Now, we could use something like LibreOffice or Excel to do our analysis, but as you will see in the final section, we are going to use a tool specially designed for this, called Timeline Explorer, and I think you will see why it is more suitable. Back in the day, when we created timelines, we used the excellent Excel timeline color template published by SANS, as shown in the screenshot. This template color-codes different types of artifacts to make it much easier for the analyst to search through huge amounts of timeline data and identify items of interest. That is still an option today, but the easier way is to use Timeline Explorer, written by Eric Zimmerman. Now, if you've seen other 13Cubed episodes, it won't be a surprise that I'm a huge fan of Zimmerman's tools. If you are familiar with Registry Explorer or any other software he wrote,
then you will feel right at home with Timeline Explorer; and if not, as long as you have even a basic knowledge of Microsoft Excel, don't worry, I think you will find it a very easy to use, intuitive tool. The first thing we do is go to the Help menu and select Legend. As you can see, each of these colors represents a certain type of activity associated with elements in the timeline; these are the same colors used by the previously mentioned SANS template, so if you are already used to that, you will feel at home here. I'll also mention that under Help,
there is a quick help section that you may want to refer to; much of what we cover in this part of the episode is explained there. Now let's go to the File menu, select Open, and choose the timeline CSV file we created in the previous section.
It only takes a few moments to load the file. Let me draw your attention to the blue status bar at the bottom: at the bottom left you can see the full path and filename of the file we have opened, and at the bottom right we see the total number of lines in the file as well as the number of visible lines, which will vary according to the filter options we'll apply shortly. At the top we see our column headings, including line, tag, timestamp, our file system MACB timestamps, meta, file name, and file size. Some of these will not be applicable depending on the type of artifact, but as we scroll through the data you will immediately notice a number of things in red enclosed in brackets, like [PROCESS]. These come from our memory dump, processed by the Volatility Timeliner plugin we ran, and would not normally exist within an fls file system timeline. Here we see the file names for the processes, the process IDs, the parent process IDs, and some additional information as you scroll down.
We will also notice a thread section listed as [THREAD], which has PID and TID information for threads extracted from memory. You will find that the processes themselves are red, which, as we can see in the legend, indicates evidence of program execution. The first thing I want to mention is that when using Timeliner, it is not uncommon to see invalid or missing dates. The plugin does its best to extract these timestamped artifacts from memory, by which I mean that what you are looking for may or may not be in a particular memory image; the data you are searching for may have been paged out or otherwise corrupted.
So remember that. Now, in the top left you will see a search box that works exactly as you would expect. If I search for something like svchost.exe and press Enter, the search runs; we give it a few moments to return the results, and you can see the first occurrence of svchost.exe right here. Looking further, we find that this is only the first of more than ten thousand references found. This is nothing special; it works like the find function in any other application. So let's close that and move on to the power filter, one of my favorite features. This one actually performs a logical OR. Here I am going to enter three terms: first we type Richard, then 13cubed, then development. Notice that I didn't have to press Enter; as soon as I started typing, the filter was applied immediately. And again, because this is a logical OR, not a logical AND, it will find any data that contains Richard or 13cubed or development.
Right away on the screen we see things like a URL from Richard's notebook; we have something called development plan.docx, and we can clearly see its file size and location; we have link (LNK) files shown in light green; we have information about a 13Cubed logo file as well as an alternate data stream for the Zone.Identifier; and various other information. You'll notice at the bottom right that visible rows now shows 26, so 26 items were displayed after applying this filter.
So again, power filters are an extremely powerful feature of the software. Let's go ahead and delete what's there, and this time type cmd.exe; again, without hitting Enter, it automatically filters for cmd.exe after a few seconds.
Now we will immediately notice that we are seeing things like nircmd.exe; in other words, it doesn't have to be a complete match, since cmd.exe can be part of a longer file name. We even see items listed in black, like sbecmd.exe, which according to our legend indicates evidence of file deletion, as we keep scrolling.
We will notice numerous other items here, including one in red that appears to be a prefetch file. It's red because it indicates program execution for cmd.exe, which of course makes perfect sense if you're familiar with prefetch.
We see another entry in black, which stands for something deleted, and then here we see memory-related information for cmd.exe in addition to our file system timeline information. So here we see two different PIDs as well as the parent PIDs.
Notice that the parent PID of 4748 is the same for both processes, while the two PIDs themselves are of course different; below that, we see more file system artifacts for the cmd.exe prefetch file, which also indicate program execution. Now, if I clear the power filter, I can go to the file name column and filter for [PROCESS], and when I do, it will of course show all the processes extracted from memory.
And you will find that the parent PID 4748 we saw earlier for our cmd.exe happens to be right here, and it is explorer.exe. So the parent process for this cmd.exe was explorer.exe, which makes perfect sense, since it was started from Windows Explorer.
Now, of course, we could spend quite a bit of time analyzing this data, as there are hundreds of thousands of lines of output. Using the filter panel, click the 'Clear Filters' box and you'll notice, at the bottom right, that visible rows resets to 95,694, so nothing is filtered out. I'll quickly scroll through this data just to give you an idea of how much we were able to extract from the combined file system and memory timeline. You'll also notice that, by default, the data is sorted by line, which presents the information in chronological order.
As I keep scrolling up, you'll find that the data in the timestamp column goes further back in time; again, just a huge amount of information. By the way, everything that was extracted by fls as a file system timeline event will of course have MACB timestamps as well as meta information from the NTFS file system.
So the fls utility on its own would already provide a huge amount of useful information, but now that we can bring memory along as well, we can get even more data out of our investigations. I could spend more time looking through this dataset, but I hope this gives you an idea of the power of combining these two types of data, as well as the power of Timeline Explorer for quick analysis.
So again, don't forget the column headings and the ability to click beneath them and filter on something; and then of course we have our normal search box and, my favorite, the power filter. If you hover over the question mark, you can actually click it, and a power filter help page is displayed that offers additional information and fancier searches, as you can see here: we can exclude or include certain things, negate certain options, or stack query strings in different ways to make our searches very precise.
So I would definitely refer you to the power filter help to get the full benefit of this software. That sums up what I wanted to show you in this episode, and I hope it was helpful: not just how the Timeliner plugin works and how it can be combined with fls,
but also how Timeline Explorer works. That covers everything I wanted to cover here. As always, thank you for watching; I hope you found the information useful and informative, and I'll catch you in the next episode.
Where are EA records processed in Windows 10?
56504 EA records processed. Both reparse point and EA INFORMATION attributes exist in file 0x2a3, due to the presence of a reparse point in file 675 (675 = 0x2a3). Both reparse point and EA INFORMATION attributes exist in file 0x5c9, due to the presence of a reparse point in file 1481 (1481 = 0x5c9).
How many EA and reparse records have been processed?
If you have ever run a Check Disk (Chkdsk.exe), you will see results like "4 EA records processed" and "76 reparse records processed". At first review, these might seem like something to be concerned about, but they should not:
What's the difference between EA Records and attribute Records?
EA Records are Extended Attribute records. They’re a feature of NTFS that allows for a file to have custom extra metadata stored along with it (metadata that is not interpretable to the file system). EA records are a somewhat obscure feature...
Why do I have EA Records in NTFS?
At first review, these are likely something that concerns you but they should not: EA Records are Extended Attribute records. They’re a feature of NTFS that allows for a file to have custom extra metadata stored along with it (metadata that is not interpretable to the file system).