diff --git a/docs/examples.md b/docs/examples.md index 9ab0e12..d2cd471 100644 --- a/docs/examples.md +++ b/docs/examples.md @@ -10,444 +10,372 @@ The following examples were run with Claude Sonnet 4.5 large language model. Below are example questions that work well with the ONTAP MCP Server: -### Volume Provisioning +### CIFS shares -**Create a Volume** - -- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate -- Expected Response: Volume "docs" has been created successfully on the umeng-aff300-05-06 cluster with 20MB size on the marketing SVM using the harvest_vc_aggr aggregate. - -**Resize a Volume** - -- On the umeng-aff300-05-06 cluster, resize the docs volume on the marketing svm to 25MB. - -- Expected Response: Volume "docs" has been successfully resized to 25MB on the marketing SVM. - -- On the umeng-aff300-05-06 cluster, increase the size of the docs volume on the marketing svm by 5MB. - -- Expected Response: Volume "docs" has been successfully increased by 5MB to 30MB on the marketing SVM. - -**Volume Autogrowth** - -- On the umeng-aff300-05-06 cluster, enable autogrowth on the docs volume in the marketing svm +**Create a CIFS share** -- Expected Response: Autogrowth has been enabled on the "docs" volume in the marketing SVM on the umeng-aff300-05-06 cluster. +- On the umeng-aff300-05-06 cluster, create a CIFS share named cifsFin at the path / on the marketing svm +- Expected Response: CIFS share created successfully. -- Increase the docs volume autogrow percent to 62 +**Update a CIFS share** -- Expected Response: The autogrow threshold for the docs volume has been successfully increased to 62%. The volume will now automatically grow when it reaches 62% capacity. +- On the umeng-aff300-05-06 cluster, update the CIFS share named cifsFin. Change its path to /cifsFin on the marketing svm +- Expected Response: CIFS share updated successfully. 
--- -### Manage QoS Policies - -**List All QoS Policies** - -- On the umeng-aff300-05-06 cluster, list all qos policies - -- Expected Response: A list of all QoS policies including both SVM-scoped and cluster-scoped policies. - -**List QoS Policies for a Specific SVM** - -- On the umeng-aff300-05-06 cluster, list all qos policies for the marketing svm - -- Expected Response: All QoS policies for the marketing SVM plus all cluster-scoped policies (cluster-scoped policies are always included). - -**List Only Fixed QoS Policies** - -- On the umeng-aff300-05-06 cluster, list all fixed qos policies - -- Expected Response: A list of fixed QoS policies including cluster-scoped ones. - -**List Only Adaptive QoS Policies** - -- On the umeng-aff300-05-06 cluster, list all adaptive qos policies - -- Expected Response: A list of adaptive QoS policies including cluster-scoped ones. - -**Create a QoS Policy** - -- On the umeng-aff300-05-06 cluster, create a fixed QoS policy named gold on the marketing svm with a max throughput of 5000 iops. - -- Expected Response: The fixed QoS policy "gold" has been successfully created on the marketing SVM with a maximum throughput of 5000 IOPS on the umeng-aff300-05-06 cluster. - -**Apply a Named QoS Policy to a Volume** - -- On the umeng-aff300-05-06 cluster, apply the gold QoS policy to the docs volume on the marketing svm - -- Expected Response: The QoS policy "gold" has been successfully applied to the "docs" volume on the marketing SVM. - -**Create a Volume with a Named QoS Policy** - -- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate with QoS policy gold - -- Expected Response: Volume "docs" has been created successfully on the marketing SVM with QoS policy "gold" applied. 
+### FCP -**Create a Volume with Inline QoS Limits** +- On the umeng-aff300-05-06 cluster, enable fcp service in marketing svm +- Expected Response: The fcp service has been successfully created. -- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate with an inline QoS limit of max_iops 300 +- On the umeng-aff300-05-06 cluster, create fc interface fc1 in marketing svm at port 0e in node umeng-aff300-01 of fcp data protocol +- Expected Response: The fc interface has been successfully created. -- Expected Response: Volume "docs" has been created successfully with an inline QoS limit of 300 IOPS. +- On the umeng-aff300-05-06 cluster, delete fc interface fc1 in marketing svm +- Expected Response: The fc interface has been successfully deleted. -**Update Inline QoS Limits on a Volume** +- On the umeng-aff300-05-06 cluster, update fcp service to disable on the marketing svm +- Expected Response: The fcp service has been successfully updated. -- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm setting an inline QoS limit of max_iops 150 +--- -- Expected Response: The inline QoS limit on volume "docs" has been updated to a maximum of 150 IOPS. +### iGroups (SAN) -**Switch a Volume from Inline QoS to a Named QoS Policy** +**Create an iGroup** -- On the umeng-aff300-05-06 cluster, apply the gold QoS policy to the docs volume on the marketing svm +- On the umeng-aff300-05-06 cluster, create an igroup named igroupFin with OS type linux and protocol iscsi on the marketing svm +- Expected Response: igroup created successfully. -- Expected Response: The QoS policy "gold" has been successfully applied to the "docs" volume, replacing the previous inline QoS limits. +- On the umeng-aff300-05-06 cluster, create lun map of lun named /vol/docs/lunpayroll and an igroup named igroupFin on the marketing svm +- Expected Response: lun map created successfully. 
-**Switch a Volume from a Named QoS Policy to Inline QoS** +- On the umeng-aff300-05-06 cluster, delete lun map of lun named /vol/docs/lunpayroll and an igroup named igroupFin on the marketing svm +- Expected Response: lun map deleted successfully. -- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm setting an inline QoS limit of max_iops 500 +**Rename an iGroup** -- Expected Response: The inline QoS limit on volume "docs" has been set to a maximum of 500 IOPS. +- On the umeng-aff300-05-06 cluster, rename igroup igroupFin to igroupFinNew and os type as windows on the marketing svm +- Expected Response: igroup updated successfully. -**Remove Inline QoS Limits from a Volume** +**Add an Initiator to an iGroup** -- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm and remove the inline QoS limit by setting max_iops to 0 +- On the umeng-aff300-05-06 cluster, add initiator iqn.2021-01.com.example:test to igroup igroupFinNew on the marketing svm +- Expected Response: initiator added to igroup successfully. -- Expected Response: The inline QoS limit on volume "docs" has been removed. The volume no longer has a maximum IOPS constraint. +**Remove an Initiator from an iGroup** -**Remove a Named QoS Policy from a Volume** +- On the umeng-aff300-05-06 cluster, remove initiator iqn.2021-01.com.example:test from igroup igroupFinNew on the marketing svm +- Expected Response: initiator removed from igroup successfully. -- On the umeng-aff300-05-06 cluster, remove the QoS policy from the docs volume on the marketing svm +**Delete an iGroup** -- Expected Response: The QoS policy has been successfully removed from the "docs" volume on the marketing SVM. +- On the umeng-aff300-05-06 cluster, delete igroup igroupFinNew on the marketing svm +- Expected Response: igroup deleted successfully. 
--- -### CIFS share Provisioning +### iSCSI -**Create a CIFS share** +- On the umeng-aff300-05-06 cluster, create an iscsi service with target alias tgpath on the marketing svm +- Expected Response: The iscsi service has been successfully created. -- On the umeng-aff300-05-06 cluster, create a CIFS share named cifsFin at the path / on the marketing svm +- On the umeng-aff300-05-06 cluster, disable the iscsi service on the marketing svm +- Expected Response: The iscsi service has been successfully updated. -Expected Response: CIFS share created successfully. +- On the umeng-aff300-05-06 cluster, delete the iscsi service on the marketing svm +- Expected Response: The iscsi service has been successfully deleted. -**Update a CIFS share** +- On the umeng-aff300-05-06 cluster, create network interface named cl_mg with ip address 10.63.41.6 and netmask 18 with Default ipspace on node umeng-aff300-06 +- Expected Response: The Network IP interface has been created successfully. -- On the umeng-aff300-05-06 cluster, update the CIFS share named cifsFin. Change it's path to /cifsFin on the marketing svm +- On the umeng-aff300-05-06 cluster, change auto revert to false in cluster scope network interface named cl_mg +- Expected Response: The Network IP interface has been updated successfully. -Expected Response: CIFS share updated successfully. +- On the umeng-aff300-05-06 cluster, delete svm scope network interface named svg1 in marketing svm +- Expected Response: The Network IP interface has been deleted successfully. --- -### NFS Export Policy Provisioning - -**Create a NFS Export policy** - -- On the umeng-aff300-05-06 cluster, create NFS export policy as nfsEngPolicy on the marketing svm - -Expected Response: NFS Export Policy created successfully. +### LUNs -**Update a NFS Export policy** - -- On the umeng-aff300-05-06 cluster, update NFS export policy from nfsEngPolicy to nfsMgrPolicy on the marketing svm and client match to 1.1.1.1/32, ro rule to any, rw rule to any.
+**Create a LUN** -Expected Response: NFS Export Policy updated successfully. +- On the umeng-aff300-05-06 cluster, create a 20MB lun named lundoc in volume doc on the marketing svm with os type linux +- Expected Response: LUN has been created successfully. ---- +**Resize a LUN** -### NFS Export Policy Rules Provisioning +- On the umeng-aff300-05-06 cluster, update lun lundoc size to 50mb in volume doc on the marketing svm +- Expected Response: LUN has been updated successfully. -**Create a NFS Export policy rule** +**Rename a LUN** -- On the umeng-aff300-05-06 cluster, create NFS export policy rule as client match 0.0.0.0/0, ro rule any, rw rule any in nfsMgrPolicy on the marketing svm +- On the umeng-aff300-05-06 cluster, rename the lun lundoc in volume doc on the marketing svm to lundocnew +- Expected Response: LUN has been updated successfully. -Expected Response: NFS Export Policy Rule created successfully. +**State change of a LUN** -**Update a NFS Export policy rule** +- On the umeng-aff300-05-06 cluster, disable the lun lundocnew in volume doc on the marketing svm +- Expected Response: LUN has been updated successfully. -- On the umeng-aff300-05-06 cluster, update NFS export policy rule for nfsMgrPolicy export policy on the marketing svm ro rule from any to never. +**Delete a LUN** -Expected Response: NFS Export Policy Rules updated successfully. +- On the umeng-aff300-05-06 cluster, delete lun lundocnew in volume doc in marketing svm +- Expected Response: LUN has been deleted successfully. --- -### NFS Export Policy Provisioning +### NFS Export Policies **Create an NFS Export policy** - On the umeng-aff300-05-06 cluster, create an NFS export policy name nfsEngPolicy on the marketing svm - -Expected Response: NFS Export Policy created successfully. +- Expected Response: NFS Export Policy created successfully. 
**Rename an NFS Export policy** - On the umeng-aff300-05-06 cluster, rename the NFS export policy from nfsEngPolicy to nfsMgrPolicy on the marketing svm. - -Expected Response: NFS Export Policy updated successfully. +- Expected Response: NFS Export Policy updated successfully. --- -### NFS Export Policy Rules Provisioning +### NFS Export Policy Rules **Create an NFS Export policy rule** - On the umeng-aff300-05-06 cluster, create an NFS export policy rule as client match 0.0.0.0/0, ro rule any, rw rule any in nfsMgrPolicy on the marketing svm - -Expected Response: NFS Export Policy Rule created successfully. +- Expected Response: NFS Export Policy Rules created successfully. **Update an NFS Export policy rule** - On the umeng-aff300-05-06 cluster, update the NFS export policy rule for nfsMgrPolicy export policy on the marketing svm ro rule from any to never. - -Expected Response: NFS Export Policy Rules updated successfully. +- Expected Response: NFS Export Policy Rules updated successfully. --- -### List Snapshots +### NVMe -- On the umeng-aff300-05-06 cluster, list the snapshots on volume docs on svm marketing. - -- Expected Response: A list of snapshots for the volume docs on the marketing SVM. - ---- - -### Manage Snapshot Policies +- On the umeng-aff300-05-06 cluster, create nvme service on the marketing svm +- Expected Response: The nvme service has been successfully created. -- On the umeng-aff300-05-06 cluster, create a snapshot policy named every4hours on the gold SVM. The schedule is 4 hours and keeps the last 5 snapshots. +- On the umeng-aff300-05-06 cluster, disable nvme service on the marketing svm +- Expected Response: The nvme service has been successfully updated. -- Expected Response: The snapshot policy "every4hours" has been successfully created on the gold SVM with a schedule of every 4 hours, retaining the last 5 snapshots on the umeng-aff300-05-06 cluster. 
+- On the umeng-aff300-05-06 cluster, create nvme subsystem sys1 with linux os on the marketing svm +- Expected Response: The nvme subsystem has been successfully created. -- On the umeng-aff300-05-06 cluster, create a snapshot policy named biweekly on the vs_test SVM. The schedule would be 2weekday12_30min and keeps the last 3 snapshots. +- On the umeng-aff300-05-06 cluster, add host nqn as nqn.1992-01.example.com:host1 in sys1 nvme subsystem in marketing svm +- Expected Response: The nvme subsystem Host has been successfully added. -Expected Response if schedule exist: The snapshot policy has been successfully created. -Expected Response if schedule not exist: no schedule 2weekday12_30min found +- On the umeng-aff300-05-06 cluster, delete nvme subsystem sys1 in marketing svm +- Expected Response: The nvme subsystem has been successfully deleted. -- On the umeng-aff300-05-06 cluster, create a snapshot policy named every5min on the vs_test SVM. The schedule is 5 min and keeps the last 2 snapshots. +- On the umeng-aff300-05-06 cluster, create nvme namespace /vol/docns/ns1 with linux os and 20mb size in nvmevs1 svm +- Expected Response: The nvme namespace has been successfully created. -Expected Response: The snapshot policy has been successfully created. +- On the umeng-aff300-05-06 cluster, create subsystem map of sys1 subsystem and /vol/docns/ns1 namespace in nvmevs1 svm +- Expected Response: The nvme subsystem map has been successfully created. --- -### Manage Schedule - -- On the umeng-aff300-05-06 cluster, create a cron schedule with 5 * * * * named as 5minutes +### QoS Policies -Expected Response: The schedule has been successfully created. - -- On the umeng-aff300-05-06 cluster, create a cron schedule with * * 11 1-2 * named as 11dayjantofeb +**List All QoS Policies** -Expected Response: The schedule has been successfully created. 
+- On the umeng-aff300-05-06 cluster, list all qos policies ---- +- Expected Response: A list of all QoS policies including both SVM-scoped and cluster-scoped policies. -### Manage Qtrees +**List QoS Policies for a Specific SVM** -- On the umeng-aff300-05-06 cluster, create a qtree named staff in docs volume on the marketing SVM +- On the umeng-aff300-05-06 cluster, list all qos policies for the marketing svm -Expected Response: The qtree has been successfully created. +- Expected Response: All QoS policies for the marketing SVM plus all cluster-scoped policies (cluster-scoped policies are always included). -- On the umeng-aff300-05-06 cluster, rename a qtree named staff to pay in docs volume on the marketing SVM +**List Only Fixed QoS Policies** -Expected Response: The qtree has been successfully renamed. +- On the umeng-aff300-05-06 cluster, list all fixed qos policies ---- +- Expected Response: A list of fixed QoS policies including cluster-scoped ones. +**List Only Adaptive QoS Policies** -### Manage NVMe +- On the umeng-aff300-05-06 cluster, list all adaptive qos policies -- On the umeng-aff300-05-06 cluster, create nvme service on the marketing svm +- Expected Response: A list of adaptive QoS policies including cluster-scoped ones. -Expected Response: The nvme service has been successfully created. +**Create a QoS Policy** -- On the umeng-aff300-05-06 cluster, disable nvme service on the marketing svm +- On the umeng-aff300-05-06 cluster, create a fixed QoS policy named gold on the marketing svm with a max throughput of 5000 iops. -Expected Response: The nvme service has been successfully updated. +- Expected Response: The fixed QoS policy "gold" has been successfully created on the marketing SVM with a maximum throughput of 5000 IOPS on the umeng-aff300-05-06 cluster. 
-- On the umeng-aff300-05-06 cluster, create nvme subsystem sys1 with linux os on the marketing svm +**Apply a Named QoS Policy to a Volume** -Expected Response: The nvme subsystem has been successfully created. +- On the umeng-aff300-05-06 cluster, apply the gold QoS policy to the docs volume on the marketing svm -- On the umeng-aff300-05-06 cluster, add host nqn as nqn.1992-01.example.com:host1 in sys1 nvme subsystem in marketing svm +- Expected Response: The QoS policy "gold" has been successfully applied to the "docs" volume on the marketing SVM. -Expected Response: The nvme subsystem Host has been successfully added. +**Create a Volume with a Named QoS Policy** -- On the umeng-aff300-05-06 cluster, delete nvme subsystem sys1 with in marketing svm +- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate with QoS policy gold -Expected Response: The nvme subsystem has been successfully deleted. +- Expected Response: Volume "docs" has been created successfully on the marketing SVM with QoS policy "gold" applied. -- On the umeng-aff300-05-06 cluster, create nvme namespace /vol/docns/ns1 with linux os and 20mb size in nvmevs1 svm +**Create a Volume with Inline QoS Limits** -Expected Response: The nvme namespace has been successfully created. +- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate with an inline QoS limit of max_iops 300 -- On the umeng-aff300-05-06 cluster, create subsystem map of sys1 subsystem and /vol/docns/ns1 namespace in nvmevs1 svm +- Expected Response: Volume "docs" has been created successfully with an inline QoS limit of 300 IOPS. -Expected Response: The nvme subsystem map has been successfully created. 
+**Update Inline QoS Limits on a Volume** ---- +- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm setting an inline QoS limit of max_iops 150 -### Manage iSCSI Service +- Expected Response: The inline QoS limit on volume "docs" has been updated to a maximum of 150 IOPS. -- On the umeng-aff300-05-06 cluster, create iscsi service target named alias tgpath on the marketing svm +**Switch a Volume from Inline QoS to a Named QoS Policy** -Expected Response: The iscsi service has been successfully created. +- On the umeng-aff300-05-06 cluster, apply the gold QoS policy to the docs volume on the marketing svm -- On the umeng-aff300-05-06 cluster, disable the iscsi service on the marketing svm +- Expected Response: The QoS policy "gold" has been successfully applied to the "docs" volume, replacing the previous inline QoS limits. -Expected Response: The iscsi service has been successfully updated. +**Switch a Volume from a Named QoS Policy to Inline QoS** -- On the umeng-aff300-05-06 cluster, delete the iscsi service on the marketing svm +- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm setting an inline QoS limit of max_iops 500 -Expected Response: The iscsi service has been successfully deleted. +- Expected Response: The inline QoS limit on volume "docs" has been set to a maximum of 500 IOPS. -- on the umeng-aff300-05-06 cluster, create network interface named cl_mg with ip address 10.63.41.6 and netmask 18 with Default ipspace on node umeng-aff300-06 +**Remove Inline QoS Limits from a Volume** -Expected Response: The Network IP interface has been created successfully. +- On the umeng-aff300-05-06 cluster, update the docs volume on the marketing svm and remove the inline QoS limit by setting max_iops to 0 -- on the umeng-aff300-05-06 cluster, change auto revert to false in cluster scope network interface named cl_mg +- Expected Response: The inline QoS limit on volume "docs" has been removed. 
The volume no longer has a maximum IOPS constraint. -Expected Response: The Network IP interface updated successfully. +**Remove a Named QoS Policy from a Volume** -- on the umeng-aff300-05-06 cluster, delete svm scope network interface named svg1 in marketing svm +- On the umeng-aff300-05-06 cluster, remove the QoS policy from the docs volume on the marketing svm -Expected Response: The Network IP interface deleted successfully. +- Expected Response: The QoS policy has been successfully removed from the "docs" volume on the marketing SVM. --- -### LUN Provisioning - -**Create a LUN** - -- On the umeng-aff300-05-06 cluster, create a 20MB lun named lundoc in volume doc on the marketing svm with os type linux - -Expected Response: LUN has been created successfully. +### Qtrees -**Resize a LUN** +- On the umeng-aff300-05-06 cluster, create a qtree named staff in docs volume on the marketing SVM +- Expected Response: The qtree has been successfully created. -- On the umeng-aff300-05-06 cluster, update lun lundoc size to 50mb in volume doc on the marketing svm +- On the umeng-aff300-05-06 cluster, rename a qtree named staff to pay in docs volume on the marketing SVM +- Expected Response: The qtree has been successfully renamed. -Expected Response: LUN has been updated successfully. +--- -**Rename a LUN** +### Schedules -- On the umeng-aff300-05-06 cluster, rename the lun lundoc in volume doc on the marketing svm to lundocnew +- On the umeng-aff300-05-06 cluster, create a cron schedule with 5 * * * * named as 5minutes +- Expected Response: The schedule has been successfully created. -Expected Response: LUN has been updated successfully. +- On the umeng-aff300-05-06 cluster, create a cron schedule with * * 11 1-2 * named as 11dayjantofeb +- Expected Response: The schedule has been successfully created. 
-**State change of a LUN** +--- -- On the umeng-aff300-05-06 cluster, disable the lun lundocnew in volume doc on the marketing svm +### Snapshot Policies -Expected Response: LUN has been updated successfully. +- On the umeng-aff300-05-06 cluster, create a snapshot policy named every4hours on the gold SVM. The schedule is 4 hours and keeps the last 5 snapshots. -**Delete a LUN** +- Expected Response: The snapshot policy "every4hours" has been successfully created on the gold SVM with a schedule of every 4 hours, retaining the last 5 snapshots on the umeng-aff300-05-06 cluster. -- On the umeng-aff300-05-06 cluster, delete lun lundocnew in volume doc in marketing svm +- On the umeng-aff300-05-06 cluster, create a snapshot policy named biweekly on the vs_test SVM. The schedule would be 2weekday12_30min and keeps the last 3 snapshots. + - Expected Response if the schedule exists: The snapshot policy has been successfully created. + - Expected Response if the schedule does not exist: no schedule 2weekday12_30min found -Expected Response: LUN has been deleted successfully. +- On the umeng-aff300-05-06 cluster, create a snapshot policy named every5min on the vs_test SVM. The schedule is 5 min and keeps the last 2 snapshots. +- Expected Response: The snapshot policy has been successfully created. --- -### Manage FCP - -- On the umeng-aff300-05-06 cluster, enable fcp service in marketing svm - -Expected Response: The fcp service has been successfully created. - -- On the umeng-aff300-05-06 cluster, create fc interface fc1 in marketing svm at port 0e in node umeng-aff300-01 of fcp data protocol - -Expected Response: The fc interface has been successfully created. +### Snapshots -- On the umeng-aff300-05-06 cluster, delete fc interface fc1 in marketing svm +- On the umeng-aff300-05-06 cluster, create a snapshot named localsnap on the docs volume on the marketing svm. 
+- Expected Response: Snapshot created successfully. -Expected Response: The fc interface has been successfully deleted. +- On the umeng-aff300-05-06 cluster, restore the docs volume from a snapshot named localsnap on the marketing svm. +- Expected Response: Snapshot restored successfully. -- On the umeng-aff300-05-06 cluster, update fcp service to disable on the marketing svm +- On the umeng-aff300-05-06 cluster, delete the localsnap snapshot on the docs volume on the marketing svm. +- Expected Response: Snapshot deleted successfully. -Expected Response: The fcp service has been successfully updated. +- On the umeng-aff300-05-06 cluster, list the snapshots on the docs volume on svm marketing. +- Expected Response: A list of snapshots for the volume docs on the marketing SVM. --- -### SVM Provisioning +### SVMs -**Create a SVM** +**Create an SVM** - On the umeng-aff300-05-06 cluster, create marketing svm +- Expected Response: SVM created successfully. -Expected Response: SVM created successfully. - -**Rename a SVM** +**Rename an SVM** - On the umeng-aff300-05-06 cluster, rename svm marketing to marketingNew +- Expected Response: SVM updated successfully. -Expected Response: SVM updated successfully. - -**Update a SVM** +**Update an SVM** - On the umeng-aff300-05-06 cluster, update svm marketingNew state to stopped and comment as `stop_svm` +- Expected Response: SVM updated successfully. -Expected Response: SVM updated successfully. - -**Delete a SVM** +**Delete an SVM** - On the umeng-aff300-05-06 cluster, delete marketingNew svm - -Expected Response: SVM deleted successfully. +- Expected Response: SVM deleted successfully. --- -### Querying Specific Fields - -**Get volume space and protection details** - -- On the umeng-aff300-05-06 cluster, for every volume on the marketing svm, show me the name, junction path, used size, available size, and snapshot policy.
- -Expected Response: A table of volumes with their junction paths, used/available space, and assigned snapshot policies on the marketing SVM. - -**Check whether it is safe to extend a volume** - -- On the umeng-aff300-05-06 cluster, can I safely grow the docs volume on the marketing svm by 10GB? Check the aggregate's available space first. - -Expected Response: A summary of aggregate free space, followed by a recommendation on whether it is safe to proceed with the resize. +### Volumes ---- - -### Manage iGroups (SAN) - -**Create an iGroup** - -- On the umeng-aff300-05-06 cluster, create an igroup named igroupFin with OS type linux and protocol iscsi on the marketing svm +**Create a Volume** -Expected Response: igroup created successfully. +- On the umeng-aff300-05-06 cluster, create a 20MB volume named docs on the marketing svm and the harvest_vc_aggr aggregate +- Expected Response: Volume "docs" has been created successfully on the umeng-aff300-05-06 cluster with 20MB size on the marketing SVM using the harvest_vc_aggr aggregate. -- On the umeng-aff300-05-06 cluster, create lun map of lun named /vol/docs/lunpayroll and an igroup named igroupFin on the marketing svm +**Resize a Volume** -Expected Response: lun map created successfully. +- On the umeng-aff300-05-06 cluster, resize the docs volume on the marketing svm to 25MB. -- On the umeng-aff300-05-06 cluster, delete lun map of lun named /vol/docs/lunpayroll and an igroup named igroupFin on the marketing svm +- Expected Response: Volume "docs" has been successfully resized to 25MB on the marketing SVM. -Expected Response: lun map deleted successfully. +- On the umeng-aff300-05-06 cluster, increase the size of the docs volume on the marketing svm by 5MB. -**Rename an iGroup** +- Expected Response: Volume "docs" has been successfully increased by 5MB to 30MB on the marketing SVM. 
-- On the umeng-aff300-05-06 cluster, rename igroup igroupFin to igroupFinNew and os type as windows on the marketing svm +**Volume Autogrowth** -Expected Response: igroup updated successfully. +- On the umeng-aff300-05-06 cluster, enable autogrowth on the docs volume in the marketing svm -**Add an Initiator to an iGroup** +- Expected Response: Autogrowth has been enabled on the "docs" volume in the marketing SVM on the umeng-aff300-05-06 cluster. -- On the umeng-aff300-05-06 cluster, add initiator iqn.2021-01.com.example:test to igroup igroupFinNew on the marketing svm +- Increase the docs volume autogrow percent to 62 -Expected Response: initiator added to igroup successfully. +- Expected Response: The autogrow threshold for the docs volume has been successfully increased to 62%. The volume will now automatically grow when it reaches 62% capacity. -**Remove an Initiator from an iGroup** +--- -- On the umeng-aff300-05-06 cluster, remove initiator iqn.2021-01.com.example:test from igroup igroupFinNew on the marketing svm +### Querying Specific Fields -Expected Response: initiator removed from igroup successfully. +**Get volume space and protection details** -**Delete an iGroup** +- On the umeng-aff300-05-06 cluster, for every volume on the marketing svm, show me the name, junction path, used size, available size, and snapshot policy. +- Expected Response: A table of volumes with their junction paths, used/available space, and assigned snapshot policies on the marketing SVM. -- On the umeng-aff300-05-06 cluster, delete igroup igroupFinNew on the marketing svm +**Check whether it is safe to extend a volume** -Expected Response: igroup deleted successfully. +- On the umeng-aff300-05-06 cluster, can I safely grow the docs volume on the marketing svm by 10GB? Check the aggregate's available space first. +- Expected Response: A summary of aggregate free space, followed by a recommendation on whether it is safe to proceed with the resize. 
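The "can I safely grow" workflow above boils down to simple arithmetic over the aggregate's free space. The sketch below is illustrative only: the function, the byte-size inputs, and the 10% headroom threshold are assumptions for this example, not part of the MCP server.

```go
package main

import "fmt"

// safeToGrow reports whether an aggregate can absorb a volume resize while
// keeping a fraction of its current free space untouched as headroom.
// The 10% headroom value is an assumed policy for this sketch.
func safeToGrow(aggregateAvailableBytes, growByBytes int64) bool {
	const headroomPct = 10
	headroom := aggregateAvailableBytes * headroomPct / 100
	return growByBytes <= aggregateAvailableBytes-headroom
}

func main() {
	const gib = int64(1) << 30
	// 100 GiB free in the aggregate, asked to grow a volume by 10 GiB.
	fmt.Println(safeToGrow(100*gib, 10*gib)) // prints "true": 10 GiB fits under 90 GiB usable
	// Growing by 95 GiB would eat into the reserved headroom.
	fmt.Println(safeToGrow(100*gib, 95*gib)) // prints "false"
}
```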
--- diff --git a/docs/tools.md b/docs/tools.md index 691557e..9d032e7 100644 --- a/docs/tools.md +++ b/docs/tools.md @@ -2,39 +2,122 @@ The following tools are provided by the ONTAP MCP server. +ONTAP MCP provides a set of tools for interacting with the ONTAP API, designed to help users discover and manage their ONTAP clusters more efficiently. The tools are grouped by functionality: API discovery, volume management, data protection, CIFS/SMB integration, NFS export policy management, performance management, SVM management, qtree management, network interface management, LUN and igroup management, iSCSI management, FCP management, NVMe management, and multi-cluster management. + +All ONTAP MCP tools are annotated with hint metadata: `readOnlyHint`, `idempotentHint`, and `destructiveHint`. The `readOnlyHint` indicates that the tool does not modify any data and is safe to use for discovery and information retrieval. The `idempotentHint` indicates that calling the tool repeatedly with the same arguments has no effect beyond the first call. The `destructiveHint` indicates that the tool performs actions that can modify or delete data, and should be used with caution. + +To run the ONTAP MCP server in read-only mode, start the server with the `--read-only` flag. In this mode, only tools with the `readOnlyHint` are available, ensuring that no modifications can be made to the ONTAP cluster. See the [configuration documentation](install.md#configuration) for details on starting the server in read-only mode.
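As an illustration of how these hints can gate tool exposure in read-only mode, the sketch below filters a tool list down to entries marked `readOnlyHint`. The struct, function, and field names are hypothetical, not the server's actual implementation.

```go
package main

import "fmt"

// Tool carries the hint metadata described above. This type is an
// illustrative stand-in for whatever the server uses internally.
type Tool struct {
	Name            string
	ReadOnlyHint    bool
	IdempotentHint  bool
	DestructiveHint bool
}

// filterReadOnly keeps only the tools that are safe to expose when the
// server is started with --read-only.
func filterReadOnly(tools []Tool) []Tool {
	var out []Tool
	for _, t := range tools {
		if t.ReadOnlyHint {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tools := []Tool{
		{Name: "list_qos_policies", ReadOnlyHint: true, IdempotentHint: true},
		{Name: "delete_volume", DestructiveHint: true},
	}
	for _, t := range filterReadOnly(tools) {
		fmt.Println(t.Name) // prints "list_qos_policies"; delete_volume is filtered out
	}
}
```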
+ +## API Discovery + +- `list_ontap_endpoints` (available when the API catalog is loaded) +- `search_ontap_endpoints` (available when the API catalog is loaded) +- `describe_ontap_endpoint` (available when the API catalog is loaded) +- `ontap_get` + ## Volume Management -- Volume lifecycle management: create, read, update, delete, resize -- Volume autogrowth: enable, disable, status -- Volume updates (multiple properties in a single operation) -- QoS policy and snapshot policy assignment -- NFS access control with export policies -- Volume details: capacity, usage +- `create_volume` +- `update_volume` +- `delete_volume` ## Data Protection -- Snapshot policies with flexible scheduling -- Snapshot schedules -- Policy application to SVMs +- `create_snapshot` +- `delete_snapshot` +- `restore_snapshot` +- `create_snapshot_policy` +- `update_snapshot_policy` +- `delete_snapshot_policy` +- `create_schedule` +- `add_schedule_in_snapshot_policy` +- `update_schedule_in_snapshot_policy` +- `remove_schedule_in_snapshot_policy` ## CIFS/SMB Integration -- CIFS list management: create, read, update, delete -- Integration with volume provisioning +- `create_cifs_share` +- `update_cifs_share` +- `delete_cifs_share` ## NFS Export Policy Management -- Export policy management: create, read, update, delete -- Volume-to-policy association +- `create_nfs_export_policies` +- `update_nfs_export_policies` +- `delete_nfs_export_policies` +- `create_nfs_export_policies_rules` +- `update_nfs_export_policies_rules` +- `delete_nfs_export_policies_rules` ## Performance Management -- QoS policy management: create, read, update, delete -- QoS policy assignment to SVMs -- Fixed QoS policies with IOPS/bandwidth limits -- Adaptive QoS policies with dynamic scaling +- `list_qos_policies` +- `create_qos_policy` +- `update_qos_policy` +- `delete_qos_policy` + +## SVM Management + +- `create_svm` +- `update_svm` +- `delete_svm` + +## Qtree Management + +- `create_qtree` +- `update_qtree` +- 
`delete_qtree` + +## Network Interface Management + +- `create_network_ip_interface` +- `update_network_ip_interface` +- `delete_network_ip_interface` + +## LUN and igroup Management + +- `create_lun` +- `update_lun` +- `delete_lun` +- `create_igroup` +- `update_igroup` +- `delete_igroup` +- `add_igroup_initiator` +- `remove_igroup_initiator` +- `create_lun_map` +- `delete_lun_map` + +## iSCSI Management + +- `create_iscsi_service` +- `update_iscsi_service` +- `delete_iscsi_service` + +## FCP Management + +- `create_fcp_service` +- `update_fcp_service` +- `delete_fcp_service` +- `create_fc_interface` +- `update_fc_interface` +- `delete_fc_interface` + +## NVMe Management + +- `create_nvme_service` +- `update_nvme_service` +- `delete_nvme_service` +- `create_nvme_subsystem` +- `update_nvme_subsystem` +- `delete_nvme_subsystem` +- `add_nvme_subsystem_host` +- `remove_nvme_subsystem_host` +- `create_nvme_namespace` +- `update_nvme_namespace` +- `delete_nvme_namespace` +- `create_nvme_subsystem_map` +- `delete_nvme_subsystem_map` ## Multi-Cluster Management -- Unified management of multiple ONTAP clusters -- Centralized credential management \ No newline at end of file +- `list_registered_clusters` \ No newline at end of file diff --git a/integration/test/snapshotPolicy_test.go b/integration/test/snapshot_policy_test.go similarity index 100% rename from integration/test/snapshotPolicy_test.go rename to integration/test/snapshot_policy_test.go