What's New in ZFS?

This section summarizes new features in the ZFS file system.

Using Cache Devices in Your ZFS Storage Pool

Solaris Express Developer Edition 1/08: In this Solaris release, you can create a pool and specify cache devices, which are used to cache storage pool data. Cache devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.

One or more cache devices can be specified when the pool is created. For example:

# zpool create pool mirror c0t2d0 c0t4d0 cache c0t0d0
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
        cache
          c0t0d0    ONLINE       0     0     0

errors: No known data errors

After cache devices are added, they gradually fill with content from main memory. Depending on the size of your cache device, it could take over an hour to fill. Capacity and reads can be monitored by using the zpool iostat command as follows:

# zpool iostat -v pool 5

Cache devices can be added to or removed from the pool after the pool is created, as in the sketch below. For more information, see Creating a ZFS Storage Pool with Cache Devices and Example 4-3.
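The following is a minimal sketch of that operation, assuming a spare disk named c0t1d0 is available (the device name is illustrative):

# zpool add pool cache c0t1d0
# zpool remove pool c0t1d0

The zpool add command attaches an additional cache device to the existing pool, and zpool remove detaches it. Because cached data is also present in the pool itself, removing a cache device does not affect stored data.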
Enhancements to the zfs send Command

Solaris Express Developer Edition 1/08: This release includes enhancements to the zfs send command, sketched below. For more information, see Sending and Receiving Complex ZFS Snapshot Streams.
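As a hedged sketch of the stream types involved, assuming the -I and -R options described in that section and hypothetical snapshots @snapA and @snapD:

# zfs send -I pool/fs@snapA pool/fs@snapD | zfs receive pool/fscopy
# zfs send -R users/home@today | zfs receive -d tank2

The first command sends all incremental snapshots between the two named snapshots as one combined stream; the second sends a replication stream of the file system and its descendents, preserving snapshots and properties.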
ZFS Quotas and Reservations for File System Data Only

Solaris Express Developer Edition 1/08: In addition to the existing ZFS quota and reservation features, this release includes dataset quotas and reservations that do not include descendents, such as snapshots and clones, in the space consumption accounting.

For example, you can set a 10-Gbyte refquota for studentA that sets a 10-Gbyte hard limit on referenced space. For additional flexibility, you can set a 20-Gbyte quota that allows you to manage studentA's snapshots.

# zfs set refquota=10g tank/studentA
# zfs set quota=20g tank/studentA

For more information, see ZFS Quotas and Reservations.
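On the reservation side there is an analogous property; a minimal sketch, assuming the refreservation property covered in ZFS Quotas and Reservations and the same dataset as above:

# zfs set refreservation=10g tank/studentA
# zfs get refquota,refreservation tank/studentA

This guarantees 10 Gbytes for the data that tank/studentA itself references, without counting snapshots and clones against the guarantee.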
ZFS File System Properties for the Solaris CIFS Service

Solaris Express Developer Edition 1/08: This release provides support for the Solaris Common Internet File System (CIFS) service. This product provides the ability to share files between Solaris and Windows or Mac OS systems. To facilitate sharing files between these systems by using the Solaris CIFS service, new ZFS properties are provided.

Currently, the sharesmb property is available to share ZFS files in the Solaris CIFS environment. More ZFS CIFS-related properties will be available in an upcoming release. For information about using the sharesmb property, see Sharing ZFS Files in a Solaris CIFS Environment.

In addition to the ZFS properties added for supporting the Solaris CIFS software product, the vscan property is available for scanning ZFS files if you have a third-party virus scanning engine.

ZFS Storage Pool Properties

Solaris Express Developer Edition 1/08: ZFS storage pool properties were introduced in an earlier release. This release provides additional property information. For example:

# zpool get all users
NAME   PROPERTY     VALUE                 SOURCE
users  size         16.8G                 -
users  used         217M                  -
users  available    16.5G                 -
users  capacity     1%                    -
users  altroot      -                     default
users  health       ONLINE                -
users  guid         11063207170669925585  -
users  version      8                     default
users  bootfs       -                     default
users  delegation   on                    default
users  autoreplace  off                   default
users  temporary    on                    local

For a description of these properties, see Table 4-1.
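Individual pool properties can also be queried and, where writable, set; a brief sketch using two of the properties from the listing above:

# zpool get delegation users
# zpool set autoreplace=on users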
ZFS and File System Mirror Mounts

Solaris Express Developer Edition 1/08: In this Solaris release, NFSv4 mount enhancements are provided to make ZFS file systems more accessible to NFS clients. When file systems are created on the NFS server, the NFS client can automatically discover these newly created file systems within its existing mount of a parent file system.

For example, if the server neo already shares the tank file system and client zee has it mounted, /tank/baz is automatically visible on the client after it is created on the server.

zee# mount neo:/tank /mnt
zee# ls /mnt
baa    bar

neo# zfs create tank/baz

zee# ls /mnt
baa    bar    baz
zee# ls /mnt/baz
file1  file2
ZFS Command History Enhancements (zpool history)

Solaris Express Developer Edition 9/07: The zpool history command has been enhanced to record zfs command events in addition to zpool command events, and to provide new -i and -l options, as shown in the following examples.

# zpool history users
History for 'users':
2007-04-26.12:44:02 zpool create users mirror c0t8d0 c0t9d0 c0t10d0
2007-04-26.12:44:38 zfs create users/markm
2007-04-26.12:44:47 zfs create users/marks
2007-04-26.12:44:57 zfs create users/neil
2007-04-26.12:47:15 zfs snapshot -r users/home@yesterday
2007-04-26.12:54:50 zfs snapshot -r users/home@today
2007-04-26.13:29:13 zfs create users/snapshots
2007-04-26.13:30:00 zfs create -o compression=gzip users/snapshots
2007-04-26.13:31:24 zfs create -o compression=gzip-9 users/oldfiles
2007-04-26.13:31:47 zfs set copies=2 users/home
2007-06-25.14:22:52 zpool offline users c0t10d0
2007-06-25.14:52:42 zpool online users c0t10d0
2007-06-25.14:53:06 zpool upgrade users

The zpool history -i option provides internal event information. For example:

# zpool history -i
.
.
.
2007-08-08.15:10:02 [internal create txg:348657] dataset = 83
2007-08-08.15:10:03 zfs create tank/mark
2007-08-08.15:27:41 [internal permission update txg:348869] ul$76928 create dataset = 5
2007-08-08.15:27:41 [internal permission update txg:348869] ul$76928 destroy dataset = 5
2007-08-08.15:27:41 [internal permission update txg:348869] ul$76928 mount dataset = 5
2007-08-08.15:27:41 [internal permission update txg:348869] ud$76928 create dataset = 5
2007-08-08.15:27:41 [internal permission update txg:348869] ud$76928 destroy dataset = 5
2007-08-08.15:27:41 [internal permission update txg:348869] ud$76928 mount dataset = 5
2007-08-08.15:27:41 zfs allow marks create,destroy,mount tank
2007-08-08.15:27:59 [internal permission update txg:348873] ud$76928 snapshot dataset = 5
2007-08-08.15:27:59 zfs allow -d marks snapshot tank

The zpool history -l option provides a long format. For example:

# zpool history -l tank
History for 'tank':
2007-07-19.10:55:13 zpool create tank mirror c0t1d0 c0t11d0 [user root on neo:global]
2007-07-19.10:55:19 zfs create tank/cindys [user root on neo:global]
2007-07-19.10:55:49 zfs allow cindys create,destroy,mount,snapshot tank/cindys [user root on neo:global]
2007-07-19.10:56:24 zfs create tank/cindys/data [user cindys on neo:global]

For more information about using the zpool history command, see Identifying Problems in ZFS.

Upgrading ZFS File Systems (zfs upgrade)

Solaris Express Developer Edition 9/07: The zfs upgrade command is included in this release to provide future ZFS file system enhancements to existing file systems. ZFS storage pools have a similar upgrade feature to provide pool enhancements to existing storage pools. For example:

# zfs upgrade
This system is currently running ZFS filesystem version 2.

The following filesystems are out of date, and can be upgraded.  After being
upgraded, these filesystems (and any 'zfs send' streams generated from
subsequent snapshots) will no longer be accessible by older software versions.

VER  FILESYSTEM
---  ------------
 1   datab
 1   datab/users
 1   datab/users/area51

Note - File systems that are upgraded, and any streams created from those upgraded file systems by the zfs send command, are not accessible on systems that are running older software releases. However, no new ZFS file system upgrade features are provided in this release.
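A brief sketch of performing the upgrade itself, assuming the per-file-system and -a forms documented in zfs(1M), with the dataset name taken from the listing above:

# zfs upgrade datab/users/area51
# zfs upgrade -a

The first form upgrades a single file system; the second upgrades all file systems on the system.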
ZFS Delegated Administration

Solaris Express Developer Edition 9/07: In this release, you can delegate fine-grained permissions to perform ZFS administration tasks to non-privileged users. You can use the zfs allow and zfs unallow commands to grant and remove permissions.

You can modify the ability to use delegated administration with the pool's delegation property. For example:

# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  on     default
# zpool set delegation=off users
# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  off    local

By default, the delegation property is enabled. For more information, see Chapter 8, ZFS Delegated Administration and zfs(1M).

Setting Up Separate ZFS Logging Devices

Solaris Express Developer Edition 9/07: The ZFS intent log (ZIL) is provided to satisfy POSIX requirements for synchronous transactions. For example, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the ZIL is allocated from blocks within the main storage pool. However, better performance might be possible by using separate intent log devices in your ZFS storage pool, such as NVRAM or a dedicated disk. Log devices for the ZFS intent log are not related to database log files.

You can set up a ZFS log device when the storage pool is created or after the pool is created, as in the sketch below. For examples of setting up log devices, see Creating a ZFS Storage Pool with Log Devices and Adding Devices to a Storage Pool. You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool. Consider carefully whether setting up a ZFS log device is appropriate for your environment.
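A minimal sketch of both approaches, using illustrative device names; log is the standard zpool keyword for intent log devices:

# zpool create datap mirror c0t1d0 c0t2d0 log c0t5d0
# zpool attach datap c0t5d0 c0t6d0

The first command creates a pool with a dedicated log device; the second attaches a second device to that log device to form a mirrored log.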
Creating Intermediate ZFS Datasets

Solaris Express Developer Edition 9/07: You can use the -p option with the zfs create, zfs clone, and zfs rename commands to quickly create intermediate datasets, if they don't already exist.

For example, create ZFS datasets (users/area51) in the datab storage pool.

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
datab  106K  16.5G    18K  /datab
# zfs create -p -o compression=on datab/users/area51

If the intermediate dataset exists during the create operation, the operation completes successfully. Properties specified apply to the target dataset, not to the intermediate datasets. For example:

# zfs get mountpoint,compression datab/users/area51
NAME                PROPERTY     VALUE                SOURCE
datab/users/area51  mountpoint   /datab/users/area51  default
datab/users/area51  compression  on                   local

The intermediate dataset is created with the default mount point. Any additional properties are disabled for the intermediate dataset. For example:

# zfs get mountpoint,compression datab/users
NAME         PROPERTY     VALUE         SOURCE
datab/users  mountpoint   /datab/users  default
datab/users  compression  off           default

For more information, see zfs(1M).

ZFS Hotplugging Enhancements

Solaris Express Developer Edition 9/07: In this release, ZFS more effectively responds to devices that are removed and provides a mechanism to automatically identify devices that are inserted.
For more information, see zpool(1M).

Recursively Renaming ZFS Snapshots (zfs rename -r)

Solaris Express Developer Edition 5/07: You can recursively rename all descendent ZFS snapshots by using the zfs rename -r command.

For example, snapshot a set of ZFS file systems.

# zfs snapshot -r users/home@today
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
users                    216K  16.5G    20K  /users
users/home                76K  16.5G    22K  /users/home
users/home@today            0      -    22K  -
users/home/markm          18K  16.5G    18K  /users/home/markm
users/home/markm@today      0      -    18K  -
users/home/marks          18K  16.5G    18K  /users/home/marks
users/home/marks@today      0      -    18K  -
users/home/neil           18K  16.5G    18K  /users/home/neil
users/home/neil@today       0      -    18K  -

Then, rename the snapshots the following day.

# zfs rename -r users/home@today @yesterday
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
users                        216K  16.5G    20K  /users
users/home                    76K  16.5G    22K  /users/home
users/home@yesterday            0      -    22K  -
users/home/markm              18K  16.5G    18K  /users/home/markm
users/home/markm@yesterday      0      -    18K  -
users/home/marks              18K  16.5G    18K  /users/home/marks
users/home/marks@yesterday      0      -    18K  -
users/home/neil               18K  16.5G    18K  /users/home/neil
users/home/neil@yesterday       0      -    18K  -

Snapshots are the only type of dataset that can be renamed recursively. For more information about snapshots, see Overview of ZFS Snapshots and this blog entry that describes how to create rolling snapshots: http://blogs.sun.com/mmusante/entry/rolling_snapshots_made_easy

GZIP Compression Is Available for ZFS

Solaris Express Developer Edition 5/07: In this Solaris release, you can set gzip compression on ZFS file systems, in addition to lzjb compression. You can specify compression as gzip for the default gzip compression level, or as gzip-N, where N equals 1 through 9. For example:

# zfs create -o compression=gzip users/home/snapshots
# zfs get compression users/home/snapshots
NAME                  PROPERTY     VALUE  SOURCE
users/home/snapshots  compression  gzip   local
# zfs create -o compression=gzip-9 users/home/oldfiles
# zfs get compression users/home/oldfiles
NAME                 PROPERTY     VALUE   SOURCE
users/home/oldfiles  compression  gzip-9  local

For more information about setting ZFS properties, see Setting ZFS Properties.
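To gauge how effective a given gzip level is on your data, you can inspect the read-only compressratio property; a brief sketch, assuming the dataset created above has accumulated data:

# zfs get compressratio users/home/oldfiles

The reported value is the ratio of the logical size of the data to the physical space it consumes after compression.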
Storing Multiple Copies of ZFS User Data

Solaris Express Developer Edition 5/07: As a reliability feature, ZFS file system metadata is automatically stored multiple times across different disks, if possible. This feature is known as ditto blocks. In this Solaris release, you can specify that multiple copies of user data are also stored per file system by using the zfs set copies command. For example:

# zfs set copies=2 users/home
# zfs get copies users/home
NAME        PROPERTY  VALUE  SOURCE
users/home  copies    2      local

Available values are 1, 2, or 3. The default value is 1. These copies are in addition to any pool-level redundancy, such as in a mirrored or RAID-Z configuration.

Note, however, that depending on the allocation of the ditto blocks in the storage pool, multiple copies might be placed on a single disk. A subsequent full disk failure might cause all ditto blocks to be unavailable. You might consider using ditto blocks when you accidentally create a non-redundant pool or when you need to set data retention policies. For a detailed description of how setting copies on a system with a single-disk pool or a multiple-disk pool might impact overall data protection, see this blog: http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection

For more information about setting ZFS properties, see Setting ZFS Properties.

Improved zpool status Output

Solaris Express 1/07: You can use the zpool status -v command to display a list of files with persistent errors. Previously, you had to use the find -inum command to identify the file names from the list of displayed inodes. For more information about displaying a list of files with persistent errors, see Repairing a Corrupted File or Directory.

ZFS and Solaris iSCSI Improvements

Solaris Express, Developer Edition 2/07: In this Solaris release, you can create a ZFS volume as a Solaris iSCSI target device by setting the shareiscsi property on the ZFS volume. This method is a convenient way to quickly set up a Solaris iSCSI target. For example:

# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0

After the iSCSI target is created, set up the iSCSI initiator. For information about setting up a Solaris iSCSI initiator, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems. For more information about managing a ZFS volume as an iSCSI target, see Using a ZFS Volume as a Solaris iSCSI Target.

Sharing ZFS File System Enhancements

Solaris Express, Developer Edition 2/07: In this Solaris release, the process of sharing file systems has been improved. Although modifying system configuration files, such as /etc/dfs/dfstab, is unnecessary for sharing ZFS file systems, you can use the sharemgr command to manage ZFS share properties. The sharemgr command enables you to set and manage share properties on share groups. ZFS shares are automatically designated in the zfs share group.

As in previous releases, you can set the ZFS sharenfs property on a ZFS file system to share a ZFS file system. For example:

# zfs set sharenfs=on tank/home

Or, you can use the new sharemgr add-share subcommand to share a ZFS file system in the zfs share group. For example:

# sharemgr add-share -s tank/data zfs
# sharemgr show -vp zfs
zfs nfs=()
    zfs/tank/data
          /tank/data
          /tank/data/1
          /tank/data/2
          /tank/data/3

Then, you can use the sharemgr command to manage ZFS shares. The following example shows how to use sharemgr to set the nosuid property on the shared ZFS file systems. You must preface ZFS share paths with the zfs share-group designation, as in zfs/tank/data.

# sharemgr set -P nfs -p nosuid=true zfs/tank/data
# sharemgr show -vp zfs
zfs nfs=()
    zfs/tank/data nfs=(nosuid="true")
          /tank/data
          /tank/data/1
          /tank/data/2
          /tank/data/3

For more information, see sharemgr(1M).
ZFS Command History (zpool history)

Solaris Express 12/06: In this Solaris release, ZFS automatically logs successful zfs and zpool commands that modify pool state information. For example:

# zpool history
History for 'newpool':
2007-04-25.11:37:31 zpool create newpool mirror c0t8d0 c0t10d0
2007-04-25.11:37:46 zpool replace newpool c0t10d0 c0t9d0
2007-04-25.11:38:04 zpool attach newpool c0t9d0 c0t11d0
2007-04-25.11:38:09 zfs create newpool/user1
2007-04-25.11:38:15 zfs destroy newpool/user1

History for 'tank':
2007-04-25.11:46:28 zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0

This feature enables you or Sun support personnel to identify the exact set of ZFS commands that was executed to troubleshoot an error scenario.

You can identify a specific storage pool with the zpool history command. For example:

# zpool history newpool
History for 'newpool':
2007-04-25.11:37:31 zpool create newpool mirror c0t8d0 c0t10d0
2007-04-25.11:37:46 zpool replace newpool c0t10d0 c0t9d0
2007-04-25.11:38:04 zpool attach newpool c0t9d0 c0t11d0
2007-04-25.11:38:09 zfs create newpool/user1
2007-04-25.11:38:15 zfs destroy newpool/user1
Currently, the zpool history command does not record user-ID, hostname, or zone-name. For more information about troubleshooting ZFS problems, see Identifying Problems in ZFS.

ZFS Property Improvements

ZFS xattr Property

Solaris Express 1/07: You can use the xattr property to disable or enable extended attributes for a specific ZFS file system. The default value is on. For a description of ZFS properties, see Introducing ZFS Properties.

ZFS canmount Property

Solaris Express 10/06: The new canmount property allows you to specify whether a dataset can be mounted by using the zfs mount command. For more information, see The canmount Property.

ZFS User Properties

Solaris Express 10/06: In addition to the standard native properties that can either export internal statistics or control ZFS file system behavior, ZFS supports user properties. User properties have no effect on ZFS behavior, but you can use them to annotate datasets with information that is meaningful in your environment. For more information, see ZFS User Properties.

Setting Properties When Creating ZFS File Systems

Solaris Express 10/06: In this Solaris release, you can set properties when you create a file system, in addition to setting properties after the file system is created. The following examples illustrate equivalent syntax:

# zfs create tank/home
# zfs set mountpoint=/export/zfs tank/home
# zfs set sharenfs=on tank/home
# zfs set compression=on tank/home

# zfs create -o mountpoint=/export/zfs -o sharenfs=on -o compression=on tank/home

Displaying All ZFS File System Information

Solaris Express 10/06: In this Solaris release, you can use various forms of the zfs get command to display information about all datasets if you do not specify a dataset. In previous releases, all dataset information was not retrievable with the zfs get command. For example:

# zfs get -s local all
tank/home          atime  off  local
tank/home/bonwick  atime  off  local
tank/home/marks    quota  50G  local

New zfs receive -F Option

Solaris Express 10/06: In this Solaris release, you can use the new -F option to the zfs receive command to force a rollback of the file system to the most recent snapshot before doing the receive. Using this option might be necessary when the file system is modified between the time a rollback occurs and the receive is initiated. For more information, see Restoring a ZFS Snapshot.

Recursive ZFS Snapshots

Solaris Express 8/06: When you use the zfs snapshot command to create a file system snapshot, you can use the -r option to recursively create snapshots for all descendent file systems. In addition, using the -r option recursively destroys all descendent snapshots when a snapshot is destroyed. Recursive ZFS snapshots are created quickly as one atomic operation. The snapshots are created together (all at once) or not created at all. The benefit of atomic snapshot operations is that the snapshot data is always taken at one consistent time, even across descendent file systems. For more information, see Creating and Destroying ZFS Snapshots.

Double-Parity RAID-Z (raidz2)

Solaris Express 7/06: A redundant RAID-Z configuration can now have either single or double parity, which means that one or two device failures, respectively, can be sustained without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z configuration. Or, you can specify the raidz or raidz1 keyword for a single-parity RAID-Z configuration. For more information, see Creating RAID-Z Storage Pools or zpool(1M).
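A minimal sketch of creating each configuration (the device names are illustrative):

# zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0
# zpool create tank2 raidz c1t1d0 c2t1d0 c3t1d0

The first pool survives any two device failures; the second survives one.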
Hot Spares for ZFS Storage Pool Devices

Solaris Express 7/06: The ZFS hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that if an active device in the pool fails, the hot spare automatically replaces the failed device. Or, you can manually replace a device in a storage pool with a hot spare. For more information, see Designating Hot Spares in Your Storage Pool and zpool(1M).

Replacing a ZFS File System With a ZFS Clone (zfs promote)

Solaris Express 7/06: The zfs promote command enables you to replace an existing ZFS file system with a clone of that file system. This feature is helpful when you want to run tests on an alternative version of a file system and then make that alternative version the active file system. For more information, see Replacing a ZFS File System With a ZFS Clone and zfs(1M).

Upgrading ZFS Storage Pools (zpool upgrade)

Solaris Express 6/06: You can upgrade your storage pools to a newer version to take advantage of the latest features by using the zpool upgrade command. In addition, the zpool status command has been modified to notify you when your pools are running older versions. For more information, see Upgrading ZFS Storage Pools and zpool(1M).

If you want to use the ZFS Administration console on a system with a pool from a previous Solaris release, make sure you upgrade your pools before using the console. To see if your pools need to be upgraded, use the zpool status command. For information about the ZFS Administration console, see ZFS Web-Based Management.

Using ZFS to Clone Non-Global Zones and Other Enhancements

Solaris Express 6/06: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool, zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm clone takes a ZFS snapshot of the source zonepath and sets up the target zonepath. The snapshot is named SUNWzoneX, where X is a unique ID used to distinguish between multiple snapshots. The destination zone's zonepath is used to name the ZFS clone. A software inventory is performed so that a snapshot used at a future time can be validated by the system. Note that you can still specify that the ZFS zonepath be copied instead of the ZFS clone, if desired.

To clone a source zone multiple times, a new parameter added to zoneadm allows you to specify that an existing snapshot should be used. The system validates that the existing snapshot is usable on the target. Additionally, the zone install process now can detect when a ZFS file system can be created for a zone, and the uninstall process can detect when a ZFS file system in a zone can be destroyed. These steps are then performed automatically by the zoneadm command. A sketch of the basic cloning operation follows. Keep in mind that some restrictions apply when using ZFS on a system with Solaris containers installed. For more information, see System Administration Guide: Virtualization Using the Solaris Operating System.
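A hedged sketch of the basic cloning operation, assuming an installed zone zone1 whose zonepath is on ZFS and a configured zone zone2 whose zonepath is in the same pool (the zone names are illustrative):

# zoneadm -z zone2 clone zone1

When both zonepaths reside in the same ZFS pool, this command takes a SUNWzoneX snapshot of zone1's zonepath and creates a ZFS clone for zone2, rather than copying the files.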
ZFS Backup and Restore Commands Are Renamed

Solaris Express 5/06: In this Solaris release, the zfs backup and zfs restore commands are renamed to zfs send and zfs receive to more accurately describe their function. The function of these commands is to save and restore ZFS data stream representations. For more information about these commands, see Saving and Restoring ZFS Data.

Recovering Destroyed Storage Pools

Solaris Express 5/06: This release includes the zpool import -D command, which enables you to recover pools that were previously destroyed with the zpool destroy command. For more information, see Recovering Destroyed ZFS Storage Pools.

ZFS Is Integrated With Fault Manager

Solaris Express 4/06: This release includes the integration of a ZFS diagnostic engine that is capable of diagnosing and reporting pool failures and device failures. Checksum, I/O, device, and pool errors associated with pool or device failures are also reported. The diagnostic engine does not include predictive analysis of checksum and I/O errors, nor does it include proactive actions based on fault analysis.

In the event of a ZFS failure, you might see a message similar to the following from fmd:

SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Fri Mar 10 11:09:06 MST 2006
PLATFORM: SUNW,Ultra-60, CSN: -, HOSTNAME: neo
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: b55ee13b-cd74-4dff-8aff-ad575c372ef8
DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.

By reviewing the recommended action, which is to follow the more specific directions in the zpool status command, you can quickly identify and resolve the failure. For an example of recovering from a reported ZFS problem, see Repairing a Missing Device.

New zpool clear Command

Solaris Express 4/06: This release includes the zpool clear command for clearing error counts associated with a device or the pool. Previously, error counts were cleared when a device in a pool was brought online with the zpool online command. For more information, see zpool(1M) and Clearing Storage Pool Devices.

Compact NFSv4 ACL Format

Solaris Express 4/06: In this release, three NFSv4 ACL formats are available: verbose, positional, and compact. The new compact and positional ACL formats are available to set and display ACLs. You can use the chmod command to set all three ACL formats. You can use the ls -V command to display the compact and positional ACL formats and the ls -v command to display the verbose ACL format. For more information, see Setting and Displaying ACLs on ZFS Files in Compact Format, chmod(1), and ls(1).

File System Monitoring Tool (fsstat)

Solaris Express 4/06: A new file system monitoring tool, fsstat, is available to report file system operations. Activity can be reported by mount point or by file system type. The following example shows general ZFS file system activity.

$ fsstat zfs
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
7.82M 5.92M 2.76M 1.02G 3.32M  5.60G 87.0M  363M 1.86T 20.9M  251G zfs

For more information, see fsstat(1M).
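Activity can also be sampled for a specific mount point at a fixed interval; a brief sketch, assuming /export/home is a mounted file system and the interval and count arguments described in fsstat(1M):

$ fsstat /export/home 1 5

This reports the counters for /export/home once per second, five times.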
ZFS Web-Based Management

Solaris Express 1/06: A web-based ZFS management tool, the ZFS Administration console, is available to perform many administrative actions. With this tool, you can manage storage pools and file systems, set properties, and create snapshots.
You can access the ZFS Administration console through a secure web browser at the following URL:

https://system-name:6789/zfs

If you type the appropriate URL and are unable to reach the ZFS Administration console, the server might not be started. To start the server, run the following command:

# /usr/sbin/smcwebserver start

If you want the server to run automatically when the system boots, run the following command:

# /usr/sbin/smcwebserver enable

Note - You cannot use the Solaris Management Console (smc) to manage ZFS storage pools or file systems.

You cannot manage ZFS file systems remotely with the ZFS Administration console because of a change in a recent Solaris release that shut down some network services automatically. Use the following command to enable these services:

# netservices open