MLtek Software is a UK-based company known for producing a range of programs aimed at tackling various IT infrastructure challenges. Everyone on its staff has worked in the IT infrastructure field and is familiar with the unique challenges this field presents.
Each of its products began with the realization that a dedicated application would make a recurring problem easier to solve. All of its solutions are created to provide a simple, unique and effective answer to a challenging problem.
This is how their first product, ArchiverFS, was developed. ArchiverFS is fully compatible with all versions of DFS (Distributed File System). Because ArchiverFS works at the file level and does not use any client-side software or low-level file system drivers, it integrates seamlessly with all Windows features, including DFS.
Targeting the live file system
When you are setting up archiving jobs that target a DFS share, it is critical to understand that everything ArchiverFS does is based on UNC paths:
- Live locations are added to a job via their UNC path.
- Archive endpoints are added to a job via their UNC path.
- Shortcuts of any kind point to moved files via their UNC path.
When pointing ArchiverFS at a DFS-based file system, it is important to use the domain-based path and not the path of an individual server. If you use a single server's path in a DFS setup where multiple real servers sit behind a domain-based DFS root, then moved files will be recorded as having been moved from that specific server rather than from the DFS root path. This can cause several problems that you don't want.
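As a rough illustration of the distinction, a small check like the following could flag jobs configured with a server-specific path instead of the domain-based namespace root. The domain, server and share names here are hypothetical, not anything ArchiverFS itself exposes:

```python
def uses_domain_based_root(unc_path: str, domain_roots: list[str]) -> bool:
    """Return True if the UNC path starts with one of the known
    domain-based DFS namespace roots (case-insensitive)."""
    path = unc_path.replace("/", "\\").lower()
    return any(path.startswith(root.lower()) for root in domain_roots)

# Hypothetical setup: the namespace \\corp.example.com\files is hosted
# on the real servers FS01 and FS02.
roots = [r"\\corp.example.com\files"]

print(uses_domain_based_root(r"\\corp.example.com\files\finance", roots))  # True
print(uses_domain_based_root(r"\\fs01\files\finance", roots))              # False
```

A path that fails a check like this would cause moved-file records and shortcuts to reference one physical server rather than the namespace.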
New Reporting Module Phase 1 Complete!
MLtek has wanted to redo the reporting function in ArchiverFS for some time, and it is excited to announce that phase 1 is now complete. Version 3.471 includes a new statistics collection function that does away with the separate reporting service, improving performance and scalability.
Phase 2, which will consist of a completely new interface for the reporting module, will hopefully be ready for version 3.472, currently scheduled for release in early to mid-February.
With ArchiverFS – Save time and money
Have you ever wondered why there is no simple yet effective way to tackle the years' worth of old files that have built up on your file server? Why do all the tools currently on the market revolve around trying to force a database to deal with what is essentially a file system issue? Well, ArchiverFS is different…
File system level
ArchiverFS works at the file system level and gives you a structured way to migrate all your old and unused files to 2nd tier storage, without trying to store files, pointers to files, or even file metadata in a database.
You get all the features you would expect such as:
- Seamless stubs that can be left in place of old files once they have been migrated
- A wealth of options to control how files are migrated, such as file age, size, and type
- Ability to compress files once they have been moved
But you also get massive scalability, agent-free operation, and compatibility with many technologies such as de-duplication and DFS. If you are interested, more details on ArchiverFS can be found on this website.
Note about replication
If you use a DFS-based file system with multiple servers, then you are probably using replication. If you are, then you are probably serving more than one geographic location with your infrastructure.
When running the first job
When you run the first job, you may end up archiving a substantial part of your file system (depending on the settings you selected).
Lots of data to replicate
If this happens, be aware that DFS will probably have a lot of data to replicate. When running the very first archive job on a DFS-based file system, it is advised that you follow these steps:
- Disable the maintenance job
- Run the first job
- Keep an eye on the DFS replication queues
- Once the replication backlog has cleared, run the maintenance job
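The sequence above can be sketched as follows. This is an illustration only: `get_backlog` stands in for however you read the replication backlog in your environment (for example by parsing the output of Microsoft's `dfsrdiag backlog` command), and the job-runner callables are hypothetical placeholders, not part of ArchiverFS:

```python
import time

def run_first_archive_cycle(run_first_job, get_backlog, run_maintenance_job,
                            poll_seconds=60):
    """Run the first archive job, wait for the DFS replication backlog
    to clear, and only then run the maintenance job (which should stay
    disabled until replication has caught up)."""
    run_first_job()
    # Poll until no files remain in the replication queue.
    while get_backlog() > 0:
        time.sleep(poll_seconds)
    run_maintenance_job()
```

The key point the sketch captures is the ordering: the maintenance job does not run until the replication queue reports an empty backlog.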
A DFS based destination
If you manage several geographic locations that all access a single DFS root replicated across those locations, then you might consider archiving to a DFS root that is replicated to all of your sites.
Requires 2nd tier storage
While this would require 2nd tier storage that is accessible at each location, it means that when users access moved files via stubs/links, their requests are served over the local network and not over the WAN. Depending on the speed of the WAN connection and the size of the file, this can speed up any needed retrieval.
These are just some of the basic reasons why this program works so fast and is so cost-effective. It is used by companies of various sizes and types all over the world, many of them household names.