Mirror of https://github.com/ansible-collections/community.general.git (synced 2026-02-04 07:51:50 +00:00)
[PR #11049/396f467b backport][stable-12] Improve Python code: address unused variables (#11058)

Improve Python code: address unused variables (#11049)

* Address F841 (unused variable).
* Reformat.
* Add changelog fragment.
* More cleanup.
* Remove trailing whitespace.
* Readd removed code as a comment with TODO.

(cherry picked from commit 396f467bbb)
Co-authored-by: Felix Fontein <felix@fontein.de>

parent 1eca76969a
commit 8cd80d94a0
90 changed files with 232 additions and 235 deletions
changelogs/fragments/11049-ruff-check.yml (new file, 62 lines)

@@ -0,0 +1,62 @@
+bugfixes:
+  - elastic callback plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - logentries callback plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - opentelemetry callback plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - syslog_json callback plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - to_prettytable filter plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - online inventory plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - xen_orchestra inventory plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - chef_databag lookup plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - consul_kv lookup plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - btrfs module utils - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - gitlab module utils - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - redfish_utils module utils - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - scaleway module utils - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - xenserver module utils - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - aerospike_migrations - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - aix_lvol - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - ali_instance - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - apt_rpm - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - btrfs_subvolume - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - discord - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - dpkg_divert - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - gitlab_branch - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - gitlab_group_members - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - gitlab_project_members - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - gitlab_protected_branch - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - infinity - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - interfaces_file - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - ipa_group - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - ipa_vault - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - ipmi_boot - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - jenkins_build - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - jenkins_build_info - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - jenkins_plugin - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - keycloak_component - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - keycloak_realm_key - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - keycloak_userprofile - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - launchd - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - listen_ports_facts - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - manageiq_alert_profiles - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - manageiq_provider - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - manageiq_tenant - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - memset_memstore_info - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - memset_server_info - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - memset_zone - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - nosh - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - odbc - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - one_service - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - one_vm - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - opendj_backendprop - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - ovh_monthly_billing - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - portinstall - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - redhat_subscription - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - redis_data_incr - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - scaleway_sshkey - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - spectrum_model_attrs - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - spotinst_aws_elastigroup - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - svc - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - vmadm - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - xenserver_guest - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - xfs_quota - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
+  - xml - improve Python code by removing unnecessary variables (https://github.com/ansible-collections/community.general/pull/11049).
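Every one of the 61 entries above traces back to Ruff's F841 rule (local variable assigned but never used). A minimal, hypothetical sketch of the pattern — the function and names below are illustrative, not taken from the collection:

```python
def non_empty_names(entries):
    """Return the names of entries that actually hold data."""
    # Before the cleanup this kind of function carried a dead binding,
    # the shape F841 reports:
    #     total = sum(e["bytes"] for e in entries)  # assigned, never read
    # The fix is simply to delete the unused assignment (or start using it).
    return [e["name"] for e in entries if e["bytes"] > 0]

report = non_empty_names([{"name": "a", "bytes": 10}, {"name": "b", "bytes": 0}])
```

Deleting such a binding changes no behaviour; Ruff only flags it because the value is computed and then thrown away.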
@@ -145,7 +145,7 @@ class ElasticSource:
         self.host = socket.gethostname()
         try:
             self.ip_address = socket.gethostbyname(socket.gethostname())
-        except Exception as e:
+        except Exception:
             self.ip_address = None
         self.user = getpass.getuser()
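The hunk above is the recurring `except ... as e:` case: the handler never touches `e`, so the binding goes away. A standalone sketch of the same shape (hypothetical helper mirroring the callback-plugin code, not the plugin itself):

```python
import socket

def resolve_own_ip():
    # The bound exception was never used in the handler, so the fix is
    # "except Exception:" instead of "except Exception as e:".
    try:
        return socket.gethostbyname(socket.gethostname())
    except Exception:
        return None

ip_address = resolve_own_ip()
```

Dropping the binding also avoids keeping the exception object (and its traceback) alive for the duration of the handler.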
@@ -254,7 +254,7 @@ class CallbackModule(CallbackBase):
 
         try:
             self.token = self.get_option("token")
-        except KeyError as e:
+        except KeyError:
             self._display.warning(
                 "Logentries token was not provided, this is required for this callback to operate, disabling"
             )
@@ -210,7 +210,7 @@ class OpenTelemetrySource:
         self.host = socket.gethostname()
         try:
             self.ip_address = socket.gethostbyname(socket.gethostname())
-        except Exception as e:
+        except Exception:
             self.ip_address = None
         self.user = getpass.getuser()
@@ -129,7 +129,7 @@ class CallbackModule(CallbackBase):
     def v2_runner_on_async_failed(self, result):
         res = result._result
         host = result._host.get_name()
-        jid = result._result.get("ansible_job_id")
+        # jid = result._result.get("ansible_job_id")
         self.logger.error(
             "%s ansible-command: task execution FAILED; host: %s; message: %s",
             self.hostname,
@@ -229,7 +229,6 @@ def _configure_alignments(table, field_names, column_alignments):
         field_names: List of field names to align
         column_alignments: Dict of column alignments
     """
-    valid_alignments = {"left", "center", "right", "l", "c", "r"}
 
     if not isinstance(column_alignments, dict):
         return
@@ -135,7 +135,7 @@ class InventoryModule(BaseInventoryPlugin):
     def _fetch_information(self, url):
         try:
             response = open_url(url, headers=self.headers)
-        except Exception as e:
+        except Exception:
             self.display.warning(f"An error happened while fetching: {url}")
             return None
 
@@ -112,7 +112,7 @@ try:
 
     if LooseVersion(websocket.__version__) <= LooseVersion("1.0.0"):
         raise ImportError
-except ImportError as e:
+except ImportError:
     HAS_WEBSOCKET = False
 
 
@@ -49,7 +49,7 @@ try:
     import chef
 
     HAS_CHEF = True
-except ImportError as missing_module:
+except ImportError:
     HAS_CHEF = False
 
 
@@ -120,7 +120,7 @@ try:
     import consul
 
     HAS_CONSUL = True
-except ImportError as e:
+except ImportError:
     HAS_CONSUL = False
 
 
@@ -86,19 +86,19 @@ class BtrfsCommands:
 
     def subvolume_set_default(self, filesystem_path, subvolume_id):
         command = [self.__btrfs, "subvolume", "set-default", str(subvolume_id), to_bytes(filesystem_path)]
-        result = self.__module.run_command(command, check_rc=True)
+        self.__module.run_command(command, check_rc=True)
 
     def subvolume_create(self, subvolume_path):
         command = [self.__btrfs, "subvolume", "create", to_bytes(subvolume_path)]
-        result = self.__module.run_command(command, check_rc=True)
+        self.__module.run_command(command, check_rc=True)
 
     def subvolume_snapshot(self, snapshot_source, snapshot_destination):
         command = [self.__btrfs, "subvolume", "snapshot", to_bytes(snapshot_source), to_bytes(snapshot_destination)]
-        result = self.__module.run_command(command, check_rc=True)
+        self.__module.run_command(command, check_rc=True)
 
     def subvolume_delete(self, subvolume_path):
         command = [self.__btrfs, "subvolume", "delete", to_bytes(subvolume_path)]
-        result = self.__module.run_command(command, check_rc=True)
+        self.__module.run_command(command, check_rc=True)
 
 
 class BtrfsInfoProvider:
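In the hunk above each `result = self.__module.run_command(...)` becomes a bare call: with `check_rc=True` the module already fails on a non-zero return code, so the bound tuple carried no extra information. A sketch with a stand-in `run_command` (not Ansible's real `AnsibleModule.run_command`):

```python
calls = []

def run_command(command, check_rc=True):
    # Stand-in: record the invocation and pretend the command succeeded.
    # The real helper returns (rc, stdout, stderr) and, with check_rc=True,
    # aborts the module itself on failure.
    calls.append(list(command))
    return (0, "", "")

def subvolume_create(subvolume_path):
    command = ["btrfs", "subvolume", "create", subvolume_path]
    # Before: result = run_command(command, check_rc=True)  # result unused
    run_command(command, check_rc=True)  # called only for its side effect

subvolume_create("/mnt/demo")
```

Since error handling lives inside the helper, discarding the return value loses nothing.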
@@ -57,11 +57,11 @@ def auth_argument_spec(spec=None):
 def find_project(gitlab_instance, identifier):
     try:
         project = gitlab_instance.projects.get(identifier)
-    except Exception as e:
+    except Exception:
         current_user = gitlab_instance.user
         try:
             project = gitlab_instance.projects.get(f"{current_user.username}/{identifier}")
-        except Exception as e:
+        except Exception:
             return None
 
     return project
@@ -70,7 +70,7 @@ def find_project(gitlab_instance, identifier):
 def find_group(gitlab_instance, identifier):
     try:
         group = gitlab_instance.groups.get(identifier)
-    except Exception as e:
+    except Exception:
         return None
 
     return group
@@ -186,7 +186,7 @@ class RedfishUtils:
             )
         try:
             data = json.loads(to_native(resp.read()))
-        except Exception as e:
+        except Exception:
             # No response data; this is okay in certain cases
             data = None
             if not allow_no_resp:
@@ -233,7 +233,7 @@ class RedfishUtils:
             )
         try:
             data = json.loads(to_native(resp.read()))
-        except Exception as e:
+        except Exception:
             # No response data; this is okay in many cases
             data = None
         except HTTPError as e:
@@ -1991,7 +1991,7 @@ class RedfishUtils:
         try:
             with open(image_file, "rb") as f:
                 image_payload = f.read()
-        except Exception as e:
+        except Exception:
             return {"ret": False, "msg": f"Could not read file {image_file}"}
 
         # Check that multipart HTTP push updates are supported
@@ -2409,7 +2409,7 @@ class RedfishUtils:
             return {"ret": False, "msg": "Key BootOrder not found"}
 
         boot = data["Boot"]
-        boot_order = boot["BootOrder"]
+        # boot_order = boot["BootOrder"] - TODO is this needed?
         boot_options_dict = self._get_boot_options_dict(boot)
 
         # Verify the requested boot options are valid
@@ -3742,7 +3742,7 @@ class RedfishUtils:
             response = self.get_request(f"{self.root_uri}/redfish/v1/Managers/{manager}", override_headers=None)
             try:
                 result["service_identification"] = response["data"]["ServiceIdentification"]
-            except Exception as e:
+            except Exception:
                 self.module.fail_json(msg=f"Service ID not found for manager {manager}")
         result["ret"] = True
         return result
@@ -3826,7 +3826,6 @@ class RedfishUtils:
 
     def get_hpe_thermal_config(self):
         result = {}
         key = "Thermal"
         # Go through list
         for chassis_uri in self.chassis_uris:
             response = self.get_request(self.root_uri + chassis_uri)
@@ -3840,8 +3839,6 @@ class RedfishUtils:
         return {"ret": False}
 
     def get_hpe_fan_percent_min(self):
         result = {}
         key = "Thermal"
         # Go through list
         for chassis_uri in self.chassis_uris:
             response = self.get_request(self.root_uri + chassis_uri)
@@ -3946,17 +3943,6 @@ class RedfishUtils:
 
         # Validate input parameters
         required_parameters = ["RAIDType", "Drives"]
-        allowed_parameters = [
-            "CapacityBytes",
-            "DisplayName",
-            "InitializeMethod",
-            "MediaSpanCount",
-            "Name",
-            "ReadCachePolicy",
-            "StripSizeBytes",
-            "VolumeUsage",
-            "WriteCachePolicy",
-        ]
 
         for parameter in required_parameters:
             if not volume_details.get(parameter):
@@ -4029,7 +4015,6 @@ class RedfishUtils:
         reg_data = reg_resp["data"]
 
         # Get BIOS attribute registry URI
-        lst = []
 
         # Get the location URI
         response = self.check_location_uri(reg_data, reg_uri)
@@ -268,7 +268,6 @@ class Scaleway:
     def fetch_paginated_resources(self, resource_key, **pagination_kwargs):
         response = self.get(path=self.api_path, params=pagination_kwargs)
 
-        status_code = response.status_code
         if not response.ok:
             self.module.fail_json(
                 msg=f"Error getting {resource_key} [{response.status_code}: {response.json['message']}]"
@@ -289,7 +289,7 @@ def get_object_ref(module, name, uuid=None, obj_type="VM", fail=True, msg_prefix
             # Find object by UUID. If no object is found using given UUID,
             # an exception will be generated.
             obj_ref = xapi_session.xenapi_request(f"{real_obj_type}.get_by_uuid", (uuid,))
-        except XenAPI.Failure as f:
+        except XenAPI.Failure:
             if fail:
                 module.fail_json(msg=f"{msg_prefix}{obj_type} with UUID '{uuid}' not found!")
             elif name:
@@ -157,7 +157,7 @@ try:
     import aerospike
     from time import sleep
     import re
-except ImportError as ie:
+except ImportError:
     LIB_FOUND = False
     LIB_FOUND_ERR = traceback.format_exc()
 else:
@@ -272,8 +272,6 @@ def main():
     if state == "absent":
         module.exit_json(changed=False, msg=f"Logical Volume {lv} does not exist.")
 
-    changed = False
-
     this_lv = parse_lv(lv_info)
 
     if state == "present" and not size:
@@ -654,7 +654,7 @@ def run_instance(module, ecs, exact_count):
     system_disk_size = module.params["system_disk_size"]
     system_disk_name = module.params["system_disk_name"]
     system_disk_description = module.params["system_disk_description"]
-    allocate_public_ip = module.params["allocate_public_ip"]
+    # allocate_public_ip = module.params["allocate_public_ip"] TODO - this is unused!
     period = module.params["period"]
     auto_renew = module.params["auto_renew"]
     instance_charge_type = module.params["instance_charge_type"]
@@ -155,7 +155,7 @@ def local_rpm_package_name(path):
     fd = os.open(path, os.O_RDONLY)
     try:
         header = ts.hdrFromFdno(fd)
-    except rpm.error as e:
+    except rpm.error:
         return None
     finally:
         os.close(fd)
@@ -593,7 +593,7 @@ class BtrfsSubvolumeModule:
 
         mount = self.module.get_bin_path("mount", required=True)
         command = [mount, "-o", f"noatime,subvolid={int(subvolid)}", device, mountpoint]
-        result = self.module.run_command(command, check_rc=True)
+        self.module.run_command(command, check_rc=True)
 
         return mountpoint
@@ -183,11 +183,6 @@ def main():
         supports_check_mode=True,
     )
 
-    result = dict(
-        changed=False,
-        http_code="",
-    )
-
     if module.check_mode:
         response, info = discord_check_mode(module)
         if info["status"] != 200:
@@ -332,7 +332,7 @@ def main():
     if os.path.exists(b_old) and not os.path.exists(b_new):
         try:
             os.rename(b_old, b_new)
-        except OSError as e:
+        except OSError:
             pass
 
     if not module.check_mode:
@@ -95,13 +95,13 @@ class GitlabBranch:
     def get_project(self, project):
         try:
             return self.repo.projects.get(project)
-        except Exception as e:
+        except Exception:
             return False
 
     def get_branch(self, branch):
         try:
             return self.project.branches.get(branch)
-        except Exception as e:
+        except Exception:
             return False
 
     def create_branch(self, branch, ref_branch):
@@ -171,7 +171,7 @@ def main():
         try:
             this_gitlab.delete_branch(this_branch)
             module.exit_json(changed=True, msg=f"Branch {branch} deleted.")
-        except Exception as e:
+        except Exception:
             module.fail_json(msg="Error delete branch.", exception=traceback.format_exc())
     else:
         module.exit_json(changed=False, msg="No changes are needed.")
@@ -197,7 +197,7 @@ class GitLabGroup:
             member = group.members.get(gitlab_user_id)
             if member:
                 return member
-        except gitlab.exceptions.GitlabGetError as e:
+        except gitlab.exceptions.GitlabGetError:
             return None
 
     # check if the user is a member of the group
@@ -210,7 +210,7 @@ class GitLabGroup:
     # add user to a group
     def add_member_to_group(self, gitlab_user_id, gitlab_group_id, access_level):
         group = self._gitlab.groups.get(gitlab_group_id)
-        add_member = group.members.create({"user_id": gitlab_user_id, "access_level": access_level})
+        group.members.create({"user_id": gitlab_user_id, "access_level": access_level})
 
     # remove user from a group
     def remove_user_from_group(self, gitlab_user_id, gitlab_group_id):
@@ -177,7 +177,7 @@ class GitLabProjectMembers:
         try:
             project_exists = self._gitlab.projects.get(project_name)
             return project_exists.id
-        except gitlab.exceptions.GitlabGetError as e:
+        except gitlab.exceptions.GitlabGetError:
             project_exists = self._gitlab.projects.list(search=project_name, all=False)
         if project_exists:
             return project_exists[0].id
@@ -200,7 +200,7 @@ class GitLabProjectMembers:
             member = project.members.get(gitlab_user_id)
             if member:
                 return member
-        except gitlab.exceptions.GitlabGetError as e:
+        except gitlab.exceptions.GitlabGetError:
             return None
 
     # check if the user is a member of the project
@@ -213,7 +213,7 @@ class GitLabProjectMembers:
     # add user to a project
     def add_member_to_project(self, gitlab_user_id, gitlab_project_id, access_level):
         project = self._gitlab.projects.get(gitlab_project_id)
-        add_member = project.members.create({"user_id": gitlab_user_id, "access_level": access_level})
+        project.members.create({"user_id": gitlab_user_id, "access_level": access_level})
 
     # remove user from a project
     def remove_user_from_project(self, gitlab_user_id, gitlab_project_id):
@@ -112,7 +112,7 @@ class GitlabProtectedBranch:
     def protected_branch_exist(self, name):
         try:
             return self.project.protectedbranches.get(name)
-        except Exception as e:
+        except Exception:
             return False
 
     def create_or_update_protected_branch(self, name, options):
@@ -443,7 +443,6 @@ class Infinity:
         add a new LAN network into a given supernet Fusionlayer Infinity via rest api or default supernet
         required fields=['network_name', 'network_family', 'network_type', 'network_address','network_size' ]
         """
         method = "post"
         resource_url = "networks"
-        response = None
         if network_name is None or network_address is None or network_size is None:
@@ -172,9 +172,6 @@ def optionDict(line, iface, option, value, address_family):
 
 
 def getValueFromLine(s):
     spaceRe = re.compile(r"\s+")
     m = list(spaceRe.finditer(s))[-1]
     valueEnd = m.start()
     option = s.split()[0]
     optionStart = s.find(option)
     optionLen = len(option)
@@ -230,7 +230,6 @@ def get_group_dict(description=None, external=None, gid=None, nonposix=None):
 
 
 def get_group_diff(client, ipa_group, module_group):
-    data = []
     # With group_add attribute nonposix is passed, whereas with group_mod only posix can be passed.
     if "nonposix" in module_group:
         # Only non-posix groups can be changed to posix
@@ -181,7 +181,7 @@ def get_vault_diff(client, ipa_vault, module_vault, module):
 def ensure(module, client):
     state = module.params["state"]
     name = module.params["cn"]
-    user = module.params["username"]
+    # user = module.params["username"] TODO is this really not needed?
     replace = module.params["replace"]
 
     module_vault = get_vault_dict(
@@ -176,7 +176,7 @@ def main():
             key = binascii.unhexlify(module.params["key"])
         else:
             key = None
-    except Exception as e:
+    except Exception:
         module.fail_json(msg="Unable to convert 'key' from hex string.")
 
     # --- run command ---
@@ -212,7 +212,7 @@ class JenkinsBuild:
         try:
             response = self.server.get_build_info(self.name, self.build_number)
             return response
-        except jenkins.JenkinsException as e:
+        except jenkins.JenkinsException:
             response = {}
             response["result"] = "ABSENT"
             return response
@@ -153,7 +153,7 @@ class JenkinsBuildInfo:
                 self.build_number = job_info["lastBuild"]["number"]
 
             return self.server.get_build_info(self.name, self.build_number)
-        except jenkins.JenkinsException as e:
+        except jenkins.JenkinsException:
             response = {}
             response["result"] = "ABSENT"
             return response
@@ -552,7 +552,7 @@ class JenkinsPlugin:
         data = urlencode(script_data)
 
         # Send the installation request
-        r = self._get_url_data(
+        self._get_url_data(
             f"{self.url}/scriptText",
             msg_status="Cannot install plugin.",
             msg_exception="Plugin installation has failed.",
@@ -227,10 +227,7 @@ def main():
 
     # Make it easier to refer to current module parameters
     name = module.params.get("name")
     force = module.params.get("force")
     state = module.params.get("state")
     enabled = module.params.get("enabled")
     provider_id = module.params.get("provider_id")
     provider_type = module.params.get("provider_type")
     parent_id = module.params.get("parent_id")
@@ -367,8 +367,6 @@ def main():
     name = module.params.get("name")
     force = module.params.get("force")
     state = module.params.get("state")
     enabled = module.params.get("enabled")
     provider_id = module.params.get("provider_id")
     parent_id = module.params.get("parent_id")
 
     # Get a list of all Keycloak components that are of keyprovider type.
@@ -627,7 +627,6 @@ def main():
 
     # Make it easier to refer to current module parameters
     state = module.params.get("state")
     enabled = module.params.get("enabled")
     parent_id = module.params.get("parent_id")
     provider_type = module.params.get("provider_type")
     provider_id = module.params.get("provider_id")
@@ -470,8 +470,7 @@ def main():
     service = module.params["name"]
     plist_filename = module.params["plist"]
     action = module.params["state"]
-    rc = 0
-    out = err = ""
+    err = ""
     result = {
         "name": service,
         "changed": False,
@@ -226,7 +226,7 @@ def netStatParse(raw):
         pid_and_name = ""
         process = ""
         formatted_line = line.split()
-        protocol, recv_q, send_q, address, foreign_address, rest = (
+        protocol, _recv_q, _send_q, address, foreign_address, rest = (
             formatted_line[0],
             formatted_line[1],
             formatted_line[2],
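The `listen_ports_facts` hunk uses the other idiom for unused values: a leading underscore on unpacked names (`_recv_q`, `_send_q`) tells linters the slots are intentionally ignored without changing the unpacking. A self-contained sketch on a fabricated netstat-style line:

```python
line = "tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd"
formatted_line = line.split()

# recv-q and send-q are not needed, so their names get a leading
# underscore instead of being dropped from the tuple unpacking.
protocol, _recv_q, _send_q, address, foreign_address = formatted_line[:5]
```

This keeps the positional unpacking readable while documenting which fields are deliberately discarded.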
@@ -218,7 +218,7 @@ class ManageIQAlertProfiles:
         # if we have any updated values
         changed = True
         try:
-            result = self.client.post(old_profile["href"], resource=profile_dict, action="edit")
+            self.client.post(old_profile["href"], resource=profile_dict, action="edit")
         except Exception as e:
             msg = "Updating profile '{name}' failed: {error}"
             msg = msg.format(name=old_profile["name"], error=e)
@@ -808,7 +808,7 @@ class ManageIQProvider:
         """
         try:
             url = f"{self.api_url}/providers/{provider['id']}"
-            result = self.client.post(url, action="refresh")
+            self.client.post(url, action="refresh")
         except Exception as e:
             self.module.fail_json(msg=f"failed to refresh provider {name}: {e}")
 
@@ -272,7 +272,7 @@ class ManageIQTenant:
 
         # try to update tenant
         try:
-            result = self.client.post(tenant["href"], action="edit", resource=resource)
+            self.client.post(tenant["href"], action="edit", resource=resource)
         except Exception as e:
             self.module.fail_json(msg=f"failed to update tenant {tenant['name']}: {e}")
 
@@ -115,7 +115,6 @@ def get_facts(args=None):
     """
     retvals, payload = dict(), dict()
     has_changed, has_failed = False, False
-    msg, stderr, memset_api = None, None, None
 
     payload["name"] = args["name"]
 
@@ -135,14 +134,12 @@ def get_facts(args=None):
         return retvals
 
     # we don't want to return the same thing twice
-    msg = None
     memset_api = response.json()
 
     retvals["changed"] = has_changed
     retvals["failed"] = has_failed
-    for val in ["msg", "memset_api"]:
-        if val is not None:
-            retvals[val] = eval(val)
+    retvals["msg"] = None
+    retvals["memset_api"] = memset_api
 
     return retvals
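The memset hunks fix more than style: in the old loop `val` was a string, so `if val is not None` was always true, and `eval(val)` re-resolved each variable by name. The replacement spells the assignments out. A reduced sketch (the dictionary contents are made up for illustration):

```python
msg = None
memset_api = {"domain": "example.com"}

retvals = {}
# Before:
#     for val in ["msg", "memset_api"]:
#         if val is not None:          # always true: val is the *name*
#             retvals[val] = eval(val)
# After: explicit, eval-free, and equivalent for these two keys.
retvals["msg"] = msg
retvals["memset_api"] = memset_api
```

Besides removing the broken `None` check, this drops `eval()` entirely, which is safer and easier for linters and readers to follow.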
@@ -246,7 +246,6 @@ def get_facts(args=None):
     """
     retvals, payload = dict(), dict()
     has_changed, has_failed = False, False
-    msg, stderr, memset_api = None, None, None
 
     payload["name"] = args["name"]
 
@@ -266,14 +265,12 @@ def get_facts(args=None):
         return retvals
 
     # we don't want to return the same thing twice
-    msg = None
     memset_api = response.json()
 
     retvals["changed"] = has_changed
     retvals["failed"] = has_failed
-    for val in ["msg", "memset_api"]:
-        if val is not None:
-            retvals[val] = eval(val)
+    retvals["msg"] = None
+    retvals["memset_api"] = memset_api
 
     return retvals
@@ -272,9 +272,9 @@ def create_or_delete(args=None):

     retvals["failed"] = has_failed
     retvals["changed"] = has_changed
-    for val in ["msg", "stderr", "memset_api"]:
-        if val is not None:
-            retvals[val] = eval(val)
+    retvals["msg"] = msg
+    retvals["stderr"] = stderr
+    retvals["memset_api"] = memset_api

     return retvals
@@ -513,8 +513,6 @@ def main():
     )

     service = module.params["name"]
-    rc = 0
-    out = err = ""
     result = {
         "name": service,
         "changed": False,
@@ -89,7 +89,7 @@ try:
     import pyodbc

     HAS_PYODBC = True
-except ImportError as e:
+except ImportError:
     HAS_PYODBC = False


@@ -153,7 +153,7 @@ def main():
                 result["description"].append(description)

             result["row_count"] = cursor.rowcount
-        except pyodbc.ProgrammingError as pe:
+        except pyodbc.ProgrammingError:
             pass
         except Exception as e:
             module.fail_json(msg=f"Exception while reading rows: {e}")
@@ -437,7 +437,7 @@ def change_service_permissions(module, auth, service_id, permissions):
     data = {"action": {"perform": "chmod", "params": {"octet": permissions}}}

     try:
-        status_result = open_url(
+        open_url(
             f"{auth.url}/service/{service_id!s}/action",
             method="POST",
             force_basic_auth=True,

@@ -453,7 +453,7 @@ def change_service_owner(module, auth, service_id, owner_id):
     data = {"action": {"perform": "chown", "params": {"owner_id": owner_id}}}

     try:
-        status_result = open_url(
+        open_url(
             f"{auth.url}/service/{service_id!s}/action",
             method="POST",
             force_basic_auth=True,

@@ -469,7 +469,7 @@ def change_service_group(module, auth, service_id, group_id):
     data = {"action": {"perform": "chgrp", "params": {"group_id": group_id}}}

     try:
-        status_result = open_url(
+        open_url(
             f"{auth.url}/service/{service_id!s}/action",
             method="POST",
             force_basic_auth=True,

@@ -666,7 +666,7 @@ def delete_service(module, auth, service_id):
         return service_info

     try:
-        result = open_url(
+        open_url(
             f"{auth.url}/service/{service_id!s}",
             method="DELETE",
             force_basic_auth=True,
@@ -1271,7 +1271,6 @@ def create_exact_count_of_vms(
     vm_count_diff = exact_count - len(vm_list)
     changed = vm_count_diff != 0

-    new_vms_list = []
     instances_list = []
     tagged_instances_list = vm_list

@@ -179,7 +179,7 @@ def main():
     backend_name = module.params["backend"]
     name = module.params["name"]
     value = module.params["value"]
-    state = module.params["state"]
+    # state = module.params["state"] TODO - ???

     if module.params["password"] is not None:
         password_method = ["-w", password]
@@ -112,7 +112,6 @@ def main():
     application_key = module.params.get("application_key")
     application_secret = module.params.get("application_secret")
     consumer_key = module.params.get("consumer_key")
-    project = ""
     instance = ""
     ovh_billing_status = ""

@@ -129,7 +128,7 @@ def main():

     # Check that the instance exists
     try:
-        project = client.get(f"/cloud/project/{project_id}")
+        client.get(f"/cloud/project/{project_id}")
     except ovh.exceptions.ResourceNotFoundError:
         module.fail_json(msg=f"project {project_id} does not exist")
@@ -71,13 +71,11 @@ def query_package(module, name):

     # Assume that if we have pkg_info, we haven't upgraded to pkgng
     if pkg_info_path:
         pkgng = False
-        pkg_glob_path = module.get_bin_path("pkg_glob", True)
+        module.get_bin_path("pkg_glob", True)
+        # TODO: convert run_comand() argument to list!
         rc, out, err = module.run_command(f"{pkg_info_path} -e `pkg_glob {shlex_quote(name)}`", use_unsafe_shell=True)
-        pkg_info_path = [pkg_info_path]
     else:
         pkgng = True
         pkg_info_path = [module.get_bin_path("pkg", True), "info"]
         rc, out, err = module.run_command(pkg_info_path + [name])
@@ -1118,11 +1118,11 @@ def main():
     password = module.params["password"]
     token = module.params["token"]
     server_hostname = module.params["server_hostname"]
-    server_insecure = module.params["server_insecure"]
-    server_prefix = module.params["server_prefix"]
-    server_port = module.params["server_port"]
-    rhsm_baseurl = module.params["rhsm_baseurl"]
-    rhsm_repo_ca_cert = module.params["rhsm_repo_ca_cert"]
+    # TODO - no longer used? module.params["server_insecure"]
+    # TODO - no longer used? module.params["server_prefix"]
+    # TODO - no longer used? module.params["server_port"]
+    # TODO - no longer used? module.params["rhsm_baseurl"]
+    # TODO - no longer used? module.params["rhsm_repo_ca_cert"]
     auto_attach = module.params["auto_attach"]
     activationkey = module.params["activationkey"]
     org_id = module.params["org_id"]

@@ -1142,10 +1142,10 @@ def main():
     consumer_name = module.params["consumer_name"]
     consumer_id = module.params["consumer_id"]
     force_register = module.params["force_register"]
-    server_proxy_hostname = module.params["server_proxy_hostname"]
-    server_proxy_port = module.params["server_proxy_port"]
-    server_proxy_user = module.params["server_proxy_user"]
-    server_proxy_password = module.params["server_proxy_password"]
+    # TODO - no longer used? module.params["server_proxy_hostname"]
+    # TODO - no longer used? module.params["server_proxy_port"]
+    # TODO - no longer used? module.params["server_proxy_user"]
+    # TODO - no longer used? module.params["server_proxy_password"]
     release = module.params["release"]
     syspurpose = module.params["syspurpose"]

@@ -126,7 +126,7 @@ def main():
         res = redis.connection.get(key)
         if res is not None:
             value = float(res)
-    except ValueError as e:
+    except ValueError:
         msg = f"Value: {res} of key: {key} is not incrementable(int or float)"
         result["msg"] = msg
         module.fail_json(**result)
@@ -120,7 +120,7 @@ def core(module):
     present_sshkeys = []
     try:
         present_sshkeys = extract_present_sshkeys(organization_json)
-    except (KeyError, IndexError) as e:
+    except (KeyError, IndexError):
         module.fail_json(changed=False, data="Error while extracting present SSH keys from API")

     if state in ("present",):
@@ -504,7 +504,7 @@ xsi:schemaLocation="http://www.ca.com/spectrum/restful/schema/request ../../../x
                 self.result["msg"] = self.success_msg
                 self.result["changed"] = True
                 continue
-            resp = self.update_model(Model_Handle, {req_name: req_val})
+            self.update_model(Model_Handle, {req_name: req_val})

         self.module.exit_json(**self.result)
@@ -906,7 +906,7 @@ def handle_elastigroup(client, module):
                 grace_period=roll_config.get("grace_period"),
                 health_check_type=roll_config.get("health_check_type"),
             )
-            roll_response = client.roll_group(group_roll=eg_roll, group_id=group_id)
+            client.roll_group(group_roll=eg_roll, group_id=group_id)
             message = "Updated and started rolling the group successfully."

         except SpotinstClientException as exc:
@@ -264,7 +264,7 @@ def main():

     svc = Svc(module)
     changed = False
-    orig_state = svc.report()
+    dummy_orig_state = svc.report()  # TODO - is this not needed?

     if enabled is not None and enabled != svc.enabled:
         changed = True
@@ -500,7 +500,7 @@ def create_payload(module, uuid):

     try:
         vmdef_json = json.dumps(vmdef)
-    except Exception as e:
+    except Exception:
         module.fail_json(msg="Could not create valid JSON payload", exception=traceback.format_exc())

     # Create the temporary file that contains our payload, and set tight
@@ -1370,7 +1370,7 @@ class XenServerVM(XenServerObject):
                 # do not support subargument specs.
                 try:
                     num_cpus = int(num_cpus)
-                except ValueError as e:
+                except ValueError:
                     self.module.fail_json(msg="VM check hardware.num_cpus: parameter should be an integer value!")

                 if num_cpus < 1:

@@ -1392,7 +1392,7 @@ class XenServerVM(XenServerObject):
                 # do not support subargument specs.
                 try:
                     num_cpu_cores_per_socket = int(num_cpu_cores_per_socket)
-                except ValueError as e:
+                except ValueError:
                     self.module.fail_json(
                         msg="VM check hardware.num_cpu_cores_per_socket: parameter should be an integer value!"
                     )

@@ -1423,7 +1423,7 @@ class XenServerVM(XenServerObject):
                 # do not support subargument specs.
                 try:
                     memory_mb = int(memory_mb)
-                except ValueError as e:
+                except ValueError:
                     self.module.fail_json(msg="VM check hardware.memory_mb: parameter should be an integer value!")

                 if memory_mb < 1:
@@ -222,7 +222,7 @@ def main():
         )
         try:
             pwd.getpwnam(name)
-        except KeyError as e:
+        except KeyError:
             module.fail_json(msg=f"User '{name}' does not exist.", **result)

     elif quota_type == "group":

@@ -238,7 +238,7 @@ def main():
         )
         try:
             grp.getgrnam(name)
-        except KeyError as e:
+        except KeyError:
             module.fail_json(msg=f"User '{name}' does not exist.", **result)

     elif quota_type == "project":
@@ -920,7 +920,6 @@ def main():
     input_type = module.params["input_type"]
     print_match = module.params["print_match"]
     count = module.params["count"]
-    backup = module.params["backup"]
     strip_cdata_tags = module.params["strip_cdata_tags"]
     insertbefore = module.params["insertbefore"]
     insertafter = module.params["insertafter"]
@@ -22,7 +22,6 @@ ignore = [
     "UP045", # Use `X | None` for type annotations - needs Python 3.10+
     # To fix:
     "E721", # Type comparison
-    "F841", # Unused variable
     "UP014", # Convert `xxx` from `NamedTuple` functional to class syntax
     "UP024", # Replace aliased errors with `OSError`
     "UP028", # Replace `yield` over `for` loop with `yield from`
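The hunk above drops `F841` (unused variable) from ruff's ignore list, which is why the rest of this commit removes assigned-but-unread locals. A minimal sketch of the pattern being fixed, using a hypothetical `FakeClient` (not part of this diff) in place of the real API clients:

```python
class FakeClient:
    """Hypothetical stand-in for the API clients patched in this diff."""

    def __init__(self):
        self.calls = []

    def post(self, href, **kwargs):
        self.calls.append((href, kwargs))
        return {"ok": True}


def update_tenant_before(client, href, resource):
    # ruff F841: `result` is assigned but never read afterwards.
    result = client.post(href, action="edit", resource=resource)  # noqa: F841


def update_tenant_after(client, href, resource):
    # Fixed: call purely for the side effect, discard the return value.
    client.post(href, action="edit", resource=resource)
```

Both functions behave identically at runtime; the second simply stops pretending the return value matters.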
@@ -226,7 +226,6 @@ class TestLookupModule(unittest.TestCase):
         with patch("ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden", mock_bitwarden):
             record = MOCK_RECORDS[0]
             record_name = record["name"]
-            session = "session"

             self.lookup.run([record_name], field=None)
             self.assertIsNone(mock_bitwarden.session)
@@ -43,7 +43,7 @@ class MockLPass(LPass):
         p = ArgumentParser()
         sp = p.add_subparsers(help="command", dest="subparser_name")

-        logout_p = sp.add_parser("logout", parents=[base_options], help="logout")
+        sp.add_parser("logout", parents=[base_options], help="logout")
         show_p = sp.add_parser("show", parents=[base_options], help="show entry details")

         field_group = show_p.add_mutually_exclusive_group(required=True)
@@ -613,4 +613,4 @@ class TestPritunlApi:
         api.pritunl_auth_request = get_pritunl_error_mock()

         with pytest.raises(api.PritunlException):
-            response = api.list_pritunl_organizations(**pritunl_settings)
+            api.list_pritunl_organizations(**pritunl_settings)
@@ -20,7 +20,7 @@ def module():


 def test_wrong_name(module):
-    with deps.declare("sys") as sys_dep:
+    with deps.declare("sys"):
         import sys  # noqa: F401, pylint: disable=unused-import

     with pytest.raises(KeyError):
@@ -51,5 +51,5 @@ def test_saslprep_conversions(source, target):

 @pytest.mark.parametrize("source,exception", INVALID)
 def test_saslprep_exceptions(source, exception):
-    with pytest.raises(exception) as ex:
+    with pytest.raises(exception):
         saslprep(source)
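Many of the test hunks drop an unused `as ex` binding from `with pytest.raises(...)`. The binding is only needed when the test inspects the captured exception afterwards. A self-contained sketch of that distinction, using a minimal stand-in context manager rather than pytest itself (the `raises` helper below is illustrative, not pytest's API):

```python
import contextlib


@contextlib.contextmanager
def raises(exc_type):
    """Minimal stand-in for pytest.raises, for illustration only."""
    caught = {}
    try:
        yield caught
    except exc_type as exc:
        caught["value"] = exc  # swallow: the expected exception occurred
    else:
        raise AssertionError(f"{exc_type.__name__} was not raised")


# Unused binding (what this cleanup removes):
#     with raises(ValueError) as ex:  # `ex` is never read
# Fixed form, as in the hunks above:
with raises(ValueError):
    int("not a number")

# Keep the binding only when the test actually inspects the exception:
with raises(ValueError) as info:
    int("nope")
```

Dropping the name changes nothing about what the test asserts; it only removes a local that ruff's F841 check would flag.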
@@ -14,9 +14,7 @@ from .common import fake_xenapi_ref

 def test_get_object_ref_xenapi_failure(mocker, fake_ansible_module, XenAPI, xenserver):
     """Tests catching of XenAPI failures."""
-    mocked_xenapi = mocker.patch.object(
-        XenAPI.Session, "xenapi_request", side_effect=XenAPI.Failure("Fake XAPI method call error!")
-    )
+    mocker.patch.object(XenAPI.Session, "xenapi_request", side_effect=XenAPI.Failure("Fake XAPI method call error!"))

     with pytest.raises(FailJsonException) as exc_info:
         xenserver.get_object_ref(fake_ansible_module, "name")

@@ -37,9 +35,7 @@ def test_get_object_ref_bad_uuid_and_name(mocker, fake_ansible_module, XenAPI, x

 def test_get_object_ref_uuid_not_found(mocker, fake_ansible_module, XenAPI, xenserver):
     """Tests when object is not found by uuid."""
-    mocked_xenapi = mocker.patch.object(
-        XenAPI.Session, "xenapi_request", side_effect=XenAPI.Failure("Fake XAPI not found error!")
-    )
+    mocker.patch.object(XenAPI.Session, "xenapi_request", side_effect=XenAPI.Failure("Fake XAPI not found error!"))

     with pytest.raises(FailJsonException) as exc_info:
         xenserver.get_object_ref(fake_ansible_module, "name", uuid="fake-uuid", msg_prefix="Test: ")

@@ -52,7 +48,7 @@ def test_get_object_ref_uuid_not_found(mocker, fake_ansible_module, XenAPI, xens

 def test_get_object_ref_name_not_found(mocker, fake_ansible_module, XenAPI, xenserver):
     """Tests when object is not found by name."""
-    mocked_xenapi = mocker.patch.object(XenAPI.Session, "xenapi_request", return_value=[])
+    mocker.patch.object(XenAPI.Session, "xenapi_request", return_value=[])

     with pytest.raises(FailJsonException) as exc_info:
         xenserver.get_object_ref(fake_ansible_module, "name", msg_prefix="Test: ")

@@ -63,9 +59,7 @@ def test_get_object_ref_name_not_found(mocker, fake_ansible_module, XenAPI, xens

 def test_get_object_ref_name_multiple_found(mocker, fake_ansible_module, XenAPI, xenserver):
     """Tests when multiple objects are found by name."""
-    mocked_xenapi = mocker.patch.object(
-        XenAPI.Session, "xenapi_request", return_value=[fake_xenapi_ref("VM"), fake_xenapi_ref("VM")]
-    )
+    mocker.patch.object(XenAPI.Session, "xenapi_request", return_value=[fake_xenapi_ref("VM"), fake_xenapi_ref("VM")])

     error_msg = "Test: multiple VMs with name 'name' found! Please use UUID."

@@ -73,7 +73,7 @@ def test_xapi_connect_local_session(mocker, fake_ansible_module, XenAPI, xenserv
     """Tests that connection to localhost uses XenAPI.xapi_local() function."""
     mocker.patch("XenAPI.xapi_local")

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    xenserver.XAPI.connect(fake_ansible_module)

     XenAPI.xapi_local.assert_called_once()

@@ -88,7 +88,7 @@ def test_xapi_connect_local_login(mocker, fake_ansible_module, XenAPI, xenserver
     """Tests that connection to localhost uses empty username and password."""
     mocker.patch.object(XenAPI.Session, "login_with_password", create=True)

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    xenserver.XAPI.connect(fake_ansible_module)

     XenAPI.Session.login_with_password.assert_called_once_with("", "", ANSIBLE_VERSION, "Ansible")

@@ -100,7 +100,7 @@ def test_xapi_connect_login(mocker, fake_ansible_module, XenAPI, xenserver):
     """
     mocker.patch.object(XenAPI.Session, "login_with_password", create=True)

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    xenserver.XAPI.connect(fake_ansible_module)

     username = fake_ansible_module.params["username"]
     password = fake_ansible_module.params["password"]

@@ -119,7 +119,7 @@ def test_xapi_connect_login_failure(mocker, fake_ansible_module, XenAPI, xenserv
     username = fake_ansible_module.params["username"]

     with pytest.raises(FailJsonException) as exc_info:
-        xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+        xenserver.XAPI.connect(fake_ansible_module)

     assert (
         exc_info.value.kwargs["msg"]

@@ -137,7 +137,7 @@ def test_xapi_connect_remote_scheme(mocker, fake_ansible_module, XenAPI, xenserv
     """Tests that explicit scheme in hostname param is preserved."""
     mocker.patch("XenAPI.Session")

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    xenserver.XAPI.connect(fake_ansible_module)

     hostname = fake_ansible_module.params["hostname"]
     ignore_ssl = not fake_ansible_module.params["validate_certs"]

@@ -155,7 +155,7 @@ def test_xapi_connect_remote_no_scheme(mocker, fake_ansible_module, XenAPI, xens
     """Tests that proper scheme is prepended to hostname without scheme."""
     mocker.patch("XenAPI.Session")

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    xenserver.XAPI.connect(fake_ansible_module)

     hostname = fake_ansible_module.params["hostname"]
     ignore_ssl = not fake_ansible_module.params["validate_certs"]

@@ -168,11 +168,10 @@ def test_xapi_connect_support_ignore_ssl(mocker, fake_ansible_module, XenAPI, xe
     mocked_session = mocker.patch("XenAPI.Session")
     mocked_session.side_effect = TypeError()

-    with pytest.raises(TypeError) as exc_info:
-        xapi_session = xenserver.XAPI.connect(fake_ansible_module)
+    with pytest.raises(TypeError):
+        xenserver.XAPI.connect(fake_ansible_module)

     hostname = fake_ansible_module.params["hostname"]
     ignore_ssl = not fake_ansible_module.params["validate_certs"]

     XenAPI.Session.assert_called_with(f"http://{hostname}")

@@ -181,7 +180,7 @@ def test_xapi_connect_no_disconnect_atexit(mocker, fake_ansible_module, XenAPI,
     """Tests skipping registration of atexit disconnect handler."""
     mocker.patch("atexit.register")

-    xapi_session = xenserver.XAPI.connect(fake_ansible_module, disconnect_atexit=False)
+    xenserver.XAPI.connect(fake_ansible_module, disconnect_atexit=False)

     atexit.register.assert_not_called()

@@ -37,7 +37,7 @@ class OneViewBaseTest:
         testing_module = getattr(oneview_module, testing_module)
         try:
             # Load scenarios from module examples (Also checks if it is a valid yaml)
-            EXAMPLES = yaml.safe_load(testing_module.EXAMPLES)
+            yaml.safe_load(testing_module.EXAMPLES)

         except yaml.scanner.ScannerError:
             message = f"Something went wrong while parsing yaml from {self.testing_class.__module__}.EXAMPLES"
@@ -77,7 +77,6 @@ class TestInterfacesFileModule(unittest.TestCase):
         string = json.dumps(ifaces, sort_keys=True, indent=4, separators=(",", ": "))
         if string and not string.endswith("\n"):
             string += "\n"
-        goldenstring = string
         goldenData = ifaces
         if not os.path.isfile(testfilepath):
             with open(testfilepath, "wb") as f:
@@ -32,7 +32,7 @@ class JenkinsBuildMock:
             instance = JenkinsMock()
             response = JenkinsMock.get_build_info(instance, "host-delete", 1234)
             return response
-        except jenkins.JenkinsException as e:
+        except jenkins.JenkinsException:
             response = {}
             response["result"] = "ABSENT"
             return response
@@ -114,7 +114,7 @@ class TestKeycloakRealmRole(ModuleTestCase):
         with set_module_args(module_args):
             with mock_good_connection():
                 with patch_keycloak_api(get_realm_info_by_id=return_value) as (mock_get_realm_info_by_id):
-                    with self.assertRaises(AnsibleExitJson) as exec_info:
+                    with self.assertRaises(AnsibleExitJson):
                         self.module.main()

         self.assertEqual(len(mock_get_realm_info_by_id.mock_calls), 1)
@@ -33,7 +33,6 @@ def _create_wrapper(text_as_string):
 def _build_mocked_request(get_id_user_count, response_dict):
     def _mocked_requests(*args, **kwargs):
         url = args[0]
-        method = kwargs["method"]
         future_response = response_dict.get(url, None)
         if callable(future_response):
             return future_response()
@@ -342,7 +342,6 @@ class TestKeycloakUserFederation(ModuleTestCase):
             }
-        ],
         ]
     return_value_component_delete = [None]
     return_value_component_create = [
         {
             "id": "eb691537-b73c-4cd8-b481-6031c26499d8",
@@ -155,7 +155,7 @@ class TestPacman:
     def test_success(self, mock_empty_inventory):
         with set_module_args({"update_cache": True}):  # Simplest args to let init go through
             P = pacman.Pacman(pacman.setup_module())
-            with pytest.raises(AnsibleExitJson) as e:
+            with pytest.raises(AnsibleExitJson):
                 P.success()

     def test_fail(self, mock_empty_inventory):
@@ -672,7 +672,7 @@ gpg: imported: 1

 @pytest.fixture
 def patch_get_bin_path(mocker):
-    get_bin_path = mocker.patch.object(
+    mocker.patch.object(
         AnsibleModule,
         "get_bin_path",
         return_value=MOCK_BIN_PATH,

@@ -689,7 +689,7 @@ def patch_get_bin_path(mocker):
 def test_operation(mocker, capfd, patch_get_bin_path, expected):
     # patch run_command invocations with mock data
     if "run_command.calls" in expected:
-        mock_run_command = mocker.patch.object(
+        mocker.patch.object(
             AnsibleModule,
             "run_command",
             side_effect=[item[2] for item in expected["run_command.calls"]],

@@ -697,7 +697,7 @@ def test_operation(mocker, capfd, patch_get_bin_path, expected):

     # patch save_key invocations with mock data
     if "save_key_output" in expected:
-        mock_save_key = mocker.patch.object(
+        mocker.patch.object(
             pacman_key.PacmanKey,
             "save_key",
             return_value=expected["save_key_output"],
@@ -73,8 +73,8 @@ class TestPritunlOrg(ModuleTestCase):
             )
         ):
             # Test creation
-            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock) as mock_get:
-                with self.patch_add_pritunl_organization(side_effect=PritunlPostOrganizationMock) as mock_add:
+            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock):
+                with self.patch_add_pritunl_organization(side_effect=PritunlPostOrganizationMock):
                     with self.assertRaises(AnsibleExitJson) as create_result:
                         self.module.main()

@@ -85,8 +85,8 @@ class TestPritunlOrg(ModuleTestCase):
         self.assertEqual(create_exc["response"]["user_count"], 0)

         # Test module idempotency
-        with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationAfterPostMock) as mock_get:
-            with self.patch_add_pritunl_organization(side_effect=PritunlPostOrganizationMock) as mock_add:
+        with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationAfterPostMock):
+            with self.patch_add_pritunl_organization(side_effect=PritunlPostOrganizationMock):
                 with self.assertRaises(AnsibleExitJson) as idempotent_result:
                     self.module.main()

@@ -115,8 +115,8 @@ class TestPritunlOrg(ModuleTestCase):
             )
         ):
             # Test deletion
-            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationAfterPostMock) as mock_get:
-                with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock) as mock_delete:
+            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationAfterPostMock):
+                with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock):
                     with self.assertRaises(AnsibleExitJson) as delete_result:
                         self.module.main()

@@ -126,8 +126,8 @@ class TestPritunlOrg(ModuleTestCase):
         self.assertEqual(delete_exc["response"], {})

         # Test module idempotency
-        with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock) as mock_get:
-            with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock) as mock_add:
+        with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock):
+            with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock):
                 with self.assertRaises(AnsibleExitJson) as idempotent_result:
                     self.module.main()

@@ -149,8 +149,8 @@ class TestPritunlOrg(ModuleTestCase):
         }
         with set_module_args(module_args):
             # Test deletion
-            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock) as mock_get:
-                with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock) as mock_delete:
+            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock):
+                with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock):
                     with self.assertRaises(AnsibleFailJson) as failure_result:
                         self.module.main()

@@ -160,10 +160,8 @@ class TestPritunlOrg(ModuleTestCase):

         # Switch force=True which should run successfully
         with set_module_args(dict_merge(module_args, {"force": True})):
-            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock) as mock_get:
-                with self.patch_delete_pritunl_organization(
-                    side_effect=PritunlDeleteOrganizationMock
-                ) as mock_delete:
+            with self.patch_get_pritunl_organizations(side_effect=PritunlListOrganizationMock):
+                with self.patch_delete_pritunl_organization(side_effect=PritunlDeleteOrganizationMock):
                     with self.assertRaises(AnsibleExitJson) as delete_result:
                         self.module.main()

@@ -103,7 +103,7 @@ class TestPritunlUser(ModuleTestCase):
                 user_params,
             )
         ):
-            with self.patch_update_pritunl_users(side_effect=PritunlPostUserMock) as post_mock:
+            with self.patch_update_pritunl_users(side_effect=PritunlPostUserMock):
                 with self.assertRaises(AnsibleExitJson) as create_result:
                     self.module.main()

@@ -132,7 +132,7 @@ class TestPritunlUser(ModuleTestCase):
                 new_user_params,
             )
         ):
-            with self.patch_update_pritunl_users(side_effect=PritunlPutUserMock) as put_mock:
+            with self.patch_update_pritunl_users(side_effect=PritunlPutUserMock):
                 with self.assertRaises(AnsibleExitJson) as update_result:
                     self.module.main()

@@ -44,9 +44,7 @@ def test_without_required_parameters_unregistered(mocker, capfd, patch_redhat_su
     """
     Failure must occurs when all parameters are missing
     """
-    mock_run_command = mocker.patch.object(
-        basic.AnsibleModule, "run_command", return_value=(1, "This system is not yet registered.", "")
-    )
+    mocker.patch.object(basic.AnsibleModule, "run_command", return_value=(1, "This system is not yet registered.", ""))

     with pytest.raises(SystemExit):
         redhat_subscription.main()

@@ -63,7 +61,7 @@ def test_without_required_parameters_registered(mocker, capfd, patch_redhat_subs
     System already registered, no parameters required (state=present is the
     default)
     """
-    mock_run_command = mocker.patch.object(
+    mocker.patch.object(
         basic.AnsibleModule,
         "run_command",
         return_value=(0, "system identity: b26df632-25ed-4452-8f89-0308bfd167cb", ""),

@@ -888,7 +886,7 @@ def test_redhat_subscription(mocker, capfd, patch_redhat_subscription, testcase)

     # Mock function used for running commands first
     call_results = [item[2] for item in testcase["run_command.calls"]]
-    mock_run_command = mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)
+    mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)

     # Try to run test case
     with pytest.raises(SystemExit):

@@ -1266,7 +1264,7 @@ def test_redhat_subscription_syspurpose(

     # Mock function used for running commands first
     call_results = [item[2] for item in testcase["run_command.calls"]]
-    mock_run_command = mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)
+    mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)

     mock_syspurpose_file = tmpdir.mkdir("syspurpose").join("syspurpose.json")
     # When there there are some existing syspurpose attributes specified, then
@@ -20,7 +20,7 @@ if tuple(map(int, __version__.split("."))) < (3, 4, 0):

 def test_redis_data_without_arguments(capfd):
     with set_module_args({}):
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             redis_data.main()
     out, err = capfd.readouterr()
     assert not err

@@ -24,7 +24,7 @@ if HAS_REDIS_USERNAME_OPTION:

 def test_redis_data_incr_without_arguments(capfd):
     with set_module_args({}):
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             redis_data_incr.main()
     out, err = capfd.readouterr()
     assert not err
@@ -773,7 +773,7 @@ def test_rhsm_repository(mocker, capfd, patch_rhsm_repository, testcase):

     # Mock function used for running commands first
     call_results = [item[2] for item in testcase["run_command.calls"]]
-    mock_run_command = mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)
+    mocker.patch.object(basic.AnsibleModule, "run_command", side_effect=call_results)

     # Try to run test case
     with pytest.raises(SystemExit):
@@ -64,7 +64,7 @@ def response_remove_nics():

 def test_scaleway_private_network_without_arguments(capfd):
     with set_module_args({}):
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             scaleway_compute_private_network.main()
     out, err = capfd.readouterr()

@@ -91,7 +91,7 @@ def test_scaleway_add_nic(capfd):
             mock_scw_get.return_value = response_without_nics()
             with patch.object(Scaleway, "post") as mock_scw_post:
                 mock_scw_post.return_value = response_when_add_nics()
-                with pytest.raises(SystemExit) as results:
+                with pytest.raises(SystemExit):
                     scaleway_compute_private_network.main()
                 mock_scw_post.assert_any_call(path=url, data={"private_network_id": pnid})
             mock_scw_get.assert_any_call(url)

@@ -119,7 +119,7 @@ def test_scaleway_add_existing_nic(capfd):
         ):
             with patch.object(Scaleway, "get") as mock_scw_get:
                 mock_scw_get.return_value = response_with_nics()
-                with pytest.raises(SystemExit) as results:
+                with pytest.raises(SystemExit):
                     scaleway_compute_private_network.main()
                 mock_scw_get.assert_any_call(url)

@@ -150,7 +150,7 @@ def test_scaleway_remove_existing_nic(capfd):
             mock_scw_get.return_value = response_with_nics()
             with patch.object(Scaleway, "delete") as mock_scw_delete:
                 mock_scw_delete.return_value = response_remove_nics()
-                with pytest.raises(SystemExit) as results:
+                with pytest.raises(SystemExit):
                     scaleway_compute_private_network.main()
                 mock_scw_delete.assert_any_call(urlremove)
             mock_scw_get.assert_any_call(url)

@@ -179,7 +179,7 @@ def test_scaleway_remove_absent_nic(capfd):
         ):
             with patch.object(Scaleway, "get") as mock_scw_get:
                 mock_scw_get.return_value = response_without_nics()
-                with pytest.raises(SystemExit) as results:
+                with pytest.raises(SystemExit):
                     scaleway_compute_private_network.main()
                 mock_scw_get.assert_any_call(url)

@@ -74,7 +74,7 @@ def response_delete():

 def test_scaleway_private_network_without_arguments(capfd):
     with set_module_args({}):
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             scaleway_private_network.main()
     out, err = capfd.readouterr()

@@ -97,7 +97,7 @@ def test_scaleway_create_pn(capfd):
         mock_scw_get.return_value = response_with_zero_network()
         with patch.object(Scaleway, "post") as mock_scw_post:
             mock_scw_post.return_value = response_create_new()
-            with pytest.raises(SystemExit) as results:
+            with pytest.raises(SystemExit):
                 scaleway_private_network.main()
             mock_scw_post.assert_any_call(
                 path="private-networks/",

@@ -124,7 +124,7 @@ def test_scaleway_existing_pn(capfd):
     os.environ["SCW_API_TOKEN"] = "notrealtoken"
     with patch.object(Scaleway, "get") as mock_scw_get:
         mock_scw_get.return_value = response_with_new_network()
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             scaleway_private_network.main()
         mock_scw_get.assert_any_call(
             "private-networks", params={"name": "new_network_name", "order_by": "name_asc", "page": 1, "page_size": 10}

@@ -152,7 +152,7 @@ def test_scaleway_add_tag_pn(capfd):
         mock_scw_get.return_value = response_with_new_network()
         with patch.object(Scaleway, "patch") as mock_scw_patch:
             mock_scw_patch.return_value = response_create_new_newtag()
-            with pytest.raises(SystemExit) as results:
+            with pytest.raises(SystemExit):
                 scaleway_private_network.main()
             mock_scw_patch.assert_any_call(
                 path="private-networks/c123b4cd-ef5g-678h-90i1-jk2345678l90",

@@ -184,7 +184,7 @@ def test_scaleway_remove_pn(capfd):
         mock_scw_get.return_value = response_with_new_network()
         with patch.object(Scaleway, "delete") as mock_scw_delete:
             mock_scw_delete.return_value = response_delete()
-            with pytest.raises(SystemExit) as results:
+            with pytest.raises(SystemExit):
                 scaleway_private_network.main()
             mock_scw_delete.assert_any_call("private-networks/c123b4cd-ef5g-678h-90i1-jk2345678l90")
             mock_scw_get.assert_any_call(

@@ -211,7 +211,7 @@ def test_scaleway_absent_pn_not_exists(capfd):
     os.environ["SCW_API_TOKEN"] = "notrealtoken"
     with patch.object(Scaleway, "get") as mock_scw_get:
         mock_scw_get.return_value = response_with_zero_network()
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
            scaleway_private_network.main()
         mock_scw_get.assert_any_call(
             "private-networks", params={"name": "new_network_name", "order_by": "name_asc", "page": 1, "page_size": 10}

@@ -47,21 +47,21 @@ class TestStatsDModule(ModuleTestCase):

     def test_udp_without_parameters(self):
         """Test udp without parameters"""
-        with self.patch_udp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
-            with self.assertRaises(AnsibleFailJson) as result:
+        with self.patch_udp_statsd_client(side_effect=FakeStatsD):
+            with self.assertRaises(AnsibleFailJson):
                 with set_module_args({}):
                     self.module.main()

     def test_tcp_without_parameters(self):
         """Test tcp without parameters"""
-        with self.patch_tcp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
-            with self.assertRaises(AnsibleFailJson) as result:
+        with self.patch_tcp_statsd_client(side_effect=FakeStatsD):
+            with self.assertRaises(AnsibleFailJson):
                 with set_module_args({}):
                     self.module.main()

     def test_udp_with_parameters(self):
         """Test udp with parameters"""
-        with self.patch_udp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
+        with self.patch_udp_statsd_client(side_effect=FakeStatsD):
             with self.assertRaises(AnsibleExitJson) as result:
                 with set_module_args(
                     {

@@ -73,7 +73,7 @@ class TestStatsDModule(ModuleTestCase):
                     self.module.main()
         self.assertEqual(result.exception.args[0]["msg"], "Sent counter my_counter -> 1 to StatsD")
         self.assertEqual(result.exception.args[0]["changed"], True)
-        with self.patch_udp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
+        with self.patch_udp_statsd_client(side_effect=FakeStatsD):
             with self.assertRaises(AnsibleExitJson) as result:
                 with set_module_args(
                     {

@@ -88,7 +88,7 @@ class TestStatsDModule(ModuleTestCase):

     def test_tcp_with_parameters(self):
         """Test tcp with parameters"""
-        with self.patch_tcp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
+        with self.patch_tcp_statsd_client(side_effect=FakeStatsD):
             with self.assertRaises(AnsibleExitJson) as result:
                 with set_module_args(
                     {

@@ -101,7 +101,7 @@ class TestStatsDModule(ModuleTestCase):
                     self.module.main()
         self.assertEqual(result.exception.args[0]["msg"], "Sent counter my_counter -> 1 to StatsD")
         self.assertEqual(result.exception.args[0]["changed"], True)
-        with self.patch_tcp_statsd_client(side_effect=FakeStatsD) as fake_statsd:
+        with self.patch_tcp_statsd_client(side_effect=FakeStatsD):
             with self.assertRaises(AnsibleExitJson) as result:
                 with set_module_args(
                     {

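The statsd hunks above show the dividing line for Ruff's F841 rule: the `as result` binding is dropped only where nothing reads it, and kept where the test asserts on `result.exception` afterwards. A minimal stand-alone sketch of that distinction, using only the standard library (hypothetical example, not code from this collection):

```python
# Stand-alone sketch of the F841 pattern (hypothetical example):
# drop a context-manager binding only when nothing reads it.
import unittest


class Demo(unittest.TestCase):
    def test_unused_binding_dropped(self):
        # No "as ctx": the captured exception is never inspected,
        # so binding it would be flagged by Ruff as F841 (unused variable).
        with self.assertRaises(ValueError):
            int("oops")

    def test_binding_kept_when_inspected(self):
        # "as ctx" stays: the exception object is asserted on afterwards.
        with self.assertRaises(ValueError) as ctx:
            int("oops")
        self.assertIn("oops", str(ctx.exception))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```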
@@ -13,7 +13,7 @@ from ansible_collections.community.internal_test_tools.tests.unit.plugins.module

 def test_terraform_without_argument(capfd):
     with set_module_args({}):
-        with pytest.raises(SystemExit) as results:
+        with pytest.raises(SystemExit):
             terraform.main()

     out, err = capfd.readouterr()

@@ -90,7 +90,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "virtual_media_insert") as mock_virtual_media_insert:
                 mock_virtual_media_insert.return_value = {"ret": True, "changed": True, "msg": "success"}

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_VirtualMediaEject_pass(self):

@@ -115,7 +115,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "virtual_media_eject") as mock_virtual_media_eject:
                 mock_virtual_media_eject.return_value = {"ret": True, "changed": True, "msg": "success"}

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_VirtualMediaEject_fail_when_required_args_missing(self):

@@ -144,7 +144,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_GetResource_fail_when_get_return_false(self):

@@ -161,7 +161,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": False, "msg": "404 error"}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_GetResource_pass(self):

@@ -178,7 +178,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_GetCollectionResource_fail_when_required_args_missing(self):

@@ -194,7 +194,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_GetCollectionResource_fail_when_get_return_false(self):

@@ -211,7 +211,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": False, "msg": "404 error"}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_GetCollectionResource_fail_when_get_not_colection(self):

@@ -228,7 +228,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_GetCollectionResource_pass_when_get_empty_collection(self):

@@ -245,7 +245,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "get_request") as mock_get_request:
                 mock_get_request.return_value = {"ret": True, "data": {"Members": [], "Members@odata.count": 0}}

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_GetCollectionResource_pass_when_get_collection(self):

@@ -265,7 +265,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
                     "data": {"Members": [{"@odata.id": "/redfish/v1/testuri/1"}], "Members@odata.count": 1},
                 }

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_PatchResource_fail_when_required_args_missing(self):

@@ -287,7 +287,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "patch_request") as mock_patch_request:
                 mock_patch_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PatchResource_fail_when_required_args_missing_no_requestbody(self):

@@ -310,7 +310,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "patch_request") as mock_patch_request:
                 mock_patch_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PatchResource_fail_when_noexisting_property_in_requestbody(self):

@@ -334,7 +334,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "patch_request") as mock_patch_request:
                 mock_patch_request.return_value = {"ret": True, "data": {"teststr": "xxxx"}}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PatchResource_fail_when_get_return_false(self):

@@ -358,7 +358,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "patch_request") as mock_patch_request:
                 mock_patch_request.return_value = {"ret": False, "msg": "500 internal error"}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PatchResource_pass(self):

@@ -385,7 +385,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
                     "data": {"teststr": "yyyy", "@odata.etag": "322e0d45d9572723c98"},
                 }

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

     def test_module_command_PostResource_fail_when_required_args_missing(self):

@@ -420,7 +420,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_invalid_resourceuri(self):

@@ -456,7 +456,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_no_requestbody(self):

@@ -492,7 +492,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_no_requestbody_2(self):

@@ -528,7 +528,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_requestbody_mismatch_with_data_from_actioninfo_uri(self):

@@ -566,7 +566,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_get_return_false(self):

@@ -587,7 +587,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_fail_when_post_return_false(self):

@@ -624,7 +624,7 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": False, "msg": "500 internal error"}

-                with self.assertRaises(AnsibleFailJson) as result:
+                with self.assertRaises(AnsibleFailJson):
                     module.main()

     def test_module_command_PostResource_pass(self):

@@ -661,5 +661,5 @@ class TestXCCRedfishCommand(unittest.TestCase):
             with patch.object(module.XCCRedfishUtils, "post_request") as mock_post_request:
                 mock_post_request.return_value = {"ret": True, "msg": "post success"}

-                with self.assertRaises(AnsibleExitJson) as result:
+                with self.assertRaises(AnsibleExitJson):
                     module.main()

@@ -320,7 +320,7 @@ def test_xenserver_guest_powerstate_wait(mocker, patch_ansible_module, capfd, Xe
         "ansible_collections.community.general.plugins.modules.xenserver_guest_powerstate.gather_vm_facts",
         return_value=fake_vm_facts,
     )
-    mocked_set_vm_power_state = mocker.patch(
+    mocker.patch(
         "ansible_collections.community.general.plugins.modules.xenserver_guest_powerstate.set_vm_power_state",
         return_value=(True, "somenewstate"),
     )
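The rhsm and xenserver hunks apply the same rule to mock setup: the patch call is made for its side effect, and the returned mock is bound to a name only when the test later asserts on it. A stand-alone sketch with the standard library's `unittest.mock` (hypothetical names, not code from this collection):

```python
# Sketch of the mock-setup variant of the F841 fix (hypothetical names,
# unittest.mock in place of pytest-mock's "mocker" fixture).
import os
from unittest import mock


def report_cwd():
    # Trivial function under test; reads os.getcwd() at call time.
    return f"cwd is {os.getcwd()}"


with mock.patch.object(os, "getcwd", return_value="/fake") as mock_getcwd:
    # Binding kept: the mock object is asserted on afterwards.
    message = report_cwd()
    mock_getcwd.assert_called_once_with()

with mock.patch.object(os, "getcwd", return_value="/fake"):
    # Binding dropped: the patch is active for its side effect only;
    # naming the mock here would be an unused variable (F841).
    message2 = report_cwd()
```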