mirror of https://github.com/ansible-collections/community.general.git synced 2026-02-04 07:51:50 +00:00

modules [lm]*: use f-strings (#10971)

* modules [lm]*: use f-strings

* add changelog frag
Alexei Znamensky 2025-10-26 19:57:24 +13:00 committed by GitHub
parent 4a6a449fbd
commit b527e80307
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
47 changed files with 453 additions and 454 deletions

View file

@ -0,0 +1,47 @@
minor_changes:
- launchd - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- layman - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lbu - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- ldap_attrs - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- ldap_inc - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- ldap_search - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- librato_annotation - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- linode - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- linode_v4 - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- listen_ports_facts - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lldp - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- locale_gen - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- logentries - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- logentries_msg - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- logstash_plugin - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lvg - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lvg_rename - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lvm_pv - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lvm_pv_move_data - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lvol - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lxc_container - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lxd_container - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lxd_profile - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- lxd_project - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- macports - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mail - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- make - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_alert_profiles - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_alerts - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_group - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_provider - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_tenant - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- manageiq_user - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mas - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mattermost - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- maven_artifact - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- memset_memstore_info - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- memset_server_info - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- memset_zone_domain - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- memset_zone_record - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mksysb - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- modprobe - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- monit - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mqtt - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mssql_db - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
- mssql_script - use f-strings for string templating (https://github.com/ansible-collections/community.general/pull/10971).
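Every entry above records the same mechanical change; a minimal sketch of the before/after pattern, using hypothetical variable names rather than any particular module's code:

```python
# Hypothetical values standing in for module state.
filename = "com.example.service.plist"
service = "com.example.service"

# Before: percent-style templating with a tuple of arguments.
old = 'Unable to find the plist file %s for service %s' % (filename, service)

# After: f-string templating; the expressions sit inline in the literal.
new = f'Unable to find the plist file {filename} for service {service}'

assert old == new
```

The output is identical in both styles; only the hunks below that involve brace escaping, raw strings, or quote nesting need more than a one-for-one substitution.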

View file

@ -127,7 +127,6 @@ from abc import ABCMeta, abstractmethod
from time import sleep
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
class ServiceState:
@ -156,15 +155,13 @@ class Plist:
if filename is not None:
self.__filename = filename
else:
self.__filename = '%s.plist' % service
self.__filename = f'{service}.plist'
state, pid, dummy, dummy = LaunchCtlList(module, self.__service).run()
self.__file = self.__find_service_plist(self.__filename)
if self.__file is None:
msg = 'Unable to find the plist file %s for service %s' % (
self.__filename, self.__service,
)
msg = f'Unable to find the plist file {self.__filename} for service {self.__service}'
if pid is None and state == ServiceState.UNLOADED:
msg += ' and it was not found among active services'
module.fail_json(msg=msg)
@ -202,8 +199,7 @@ class Plist:
with open(self.__file, 'rb') as plist_fp:
service_plist = plistlib.load(plist_fp)
except Exception as e:
module.fail_json(msg="Failed to read plist file "
"%s due to %s" % (self.__file, to_native(e)))
module.fail_json(msg=f"Failed to read plist file {self.__file} due to {e}")
return service_plist
def __write_plist_file(self, module, service_plist=None):
@ -214,8 +210,7 @@ class Plist:
with open(self.__file, 'wb') as plist_fp:
plistlib.dump(service_plist, plist_fp)
except Exception as e:
module.fail_json(msg="Failed to write to plist file "
" %s due to %s" % (self.__file, to_native(e)))
module.fail_json(msg=f"Failed to write to plist file {self.__file} due to {e}")
def __handle_param_enabled(self, module):
if module.params['enabled'] is not None:
@ -282,7 +277,7 @@ class LaunchCtlTask(metaclass=ABCMeta):
rc, out, err = self._launchctl("list")
if rc != 0:
self._module.fail_json(
msg='Failed to get status of %s' % (self._launch))
msg=f'Failed to get status of {self._launch}')
state = ServiceState.UNLOADED
service_pid = "-"
@ -348,11 +343,10 @@ class LaunchCtlTask(metaclass=ABCMeta):
'load', 'unload'] else self._service if command in ['start', 'stop'] else ""
rc, out, err = self._module.run_command(
'%s %s %s' % (self._launch, command, service_or_plist))
f'{self._launch} {command} {service_or_plist}')
if rc != 0:
msg = "Unable to %s '%s' (%s): '%s'" % (
command, self._service, self._plist.get_file(), err)
msg = f"Unable to {command} '{self._service}' ({self._plist.get_file()}): '{err}'"
self._module.fail_json(msg=msg)
return (rc, out, err)
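The hunk above folds a three-argument % template into a single f-string passed to run_command; a sketch of the equivalence, with hypothetical paths standing in for the module's `self._launch` and friends:

```python
# Hypothetical stand-ins for the launchd module's attributes.
launch = '/bin/launchctl'
command = 'load'
service_or_plist = '/Library/LaunchDaemons/com.example.plist'

# The % tuple and the f-string render the same command line.
old = '%s %s %s' % (launch, command, service_or_plist)
new = f'{launch} {command} {service_or_plist}'
assert old == new == '/bin/launchctl load /Library/LaunchDaemons/com.example.plist'
```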

View file

@ -125,13 +125,13 @@ def download_url(module, url, dest):
module.params['http_agent'] = USERAGENT
response, info = fetch_url(module, url)
if info['status'] != 200:
raise ModuleError("Failed to get %s: %s" % (url, info['msg']))
raise ModuleError(f"Failed to get {url}: {info['msg']}")
try:
with open(dest, 'w') as f:
shutil.copyfileobj(response, f)
except IOError as e:
raise ModuleError("Failed to write: %s" % str(e))
raise ModuleError(f"Failed to write: {e}")
def install_overlay(module, name, list_url=None):
@ -156,16 +156,15 @@ def install_overlay(module, name, list_url=None):
return False
if module.check_mode:
mymsg = 'Would add layman repo \'' + name + '\''
mymsg = f"Would add layman repo '{name}'"
module.exit_json(changed=True, msg=mymsg)
if not layman.is_repo(name):
if not list_url:
raise ModuleError("Overlay '%s' is not on the list of known "
"overlays and URL of the remote list was not provided." % name)
raise ModuleError(f"Overlay '{name}' is not on the list of known overlays and URL of the remote list was not provided.")
overlay_defs = layman_conf.get_option('overlay_defs')
dest = path.join(overlay_defs, name + '.xml')
dest = path.join(overlay_defs, f"{name}.xml")
download_url(module, list_url, dest)
@ -193,7 +192,7 @@ def uninstall_overlay(module, name):
return False
if module.check_mode:
mymsg = 'Would remove layman repo \'' + name + '\''
mymsg = f"Would remove layman repo '{name}'"
module.exit_json(changed=True, msg=mymsg)
layman.delete_repos(name)
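The download_url hunk above embeds `info['msg']` inside a double-quoted f-string. That works because the inner quotes differ from the outer ones; reusing the outer quote character inside a replacement field is only legal from Python 3.12 (PEP 701), so mixed quoting keeps the code portable. A sketch with hypothetical values:

```python
url = 'https://example.invalid/repositories.xml'
info = {'status': 404, 'msg': 'Not Found'}

# Inner single quotes, outer double quotes: valid on all supported Pythons.
msg = f"Failed to get {url}: {info['msg']}"
assert msg == 'Failed to get https://example.invalid/repositories.xml: Not Found'
```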

View file

@ -104,7 +104,7 @@ def run_module():
if module.params[param]:
paths = run_lbu(param, '-l').split('\n')
for path in module.params[param]:
if os.path.normpath('/' + path)[1:] not in paths:
if os.path.normpath(f"/{path}")[1:] not in paths:
update = True
if module.params['commit']:

View file

@ -194,7 +194,7 @@ class LdapAttrs(LdapGeneric):
if isinstance(values, list):
for index, value in enumerate(values):
cleaned_value = re.sub(r'^\{\d+\}', '', value)
ordered_values.append('{' + str(index) + '}' + cleaned_value)
ordered_values.append(f"{{{index!s}}}{cleaned_value}")
return ordered_values
@ -254,7 +254,7 @@ class LdapAttrs(LdapGeneric):
results = self.connection.search_s(
self.dn, ldap.SCOPE_BASE, attrlist=[name])
except ldap.LDAPError as e:
self.fail("Cannot search for attribute %s" % name, e)
self.fail(f"Cannot search for attribute {name}", e)
current = results[0][1].get(name, [])
@ -277,7 +277,7 @@ class LdapAttrs(LdapGeneric):
""" True if the target attribute has the given value. """
try:
escaped_value = ldap.filter.escape_filter_chars(to_text(value))
filterstr = "(%s=%s)" % (name, escaped_value)
filterstr = f"({name}={escaped_value})"
dns = self.connection.search_s(self.dn, ldap.SCOPE_BASE, filterstr)
is_present = len(dns) == 1
except ldap.NO_SUCH_OBJECT:
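The ordered-values hunk above is the one non-obvious conversion in this file: literal braces in an f-string must be doubled, so `'{' + str(index) + '}'` becomes the triple-brace `{{{index!s}}}` (a literal `{`, then the interpolated index with the `!s` str() conversion, then a literal `}`). A sketch:

```python
index = 2
cleaned_value = 'uid=admin,ou=people'

# {{ -> literal "{", {index!s} -> str(index), }} -> literal "}"
new = f'{{{index!s}}}{cleaned_value}'
old = '{' + str(index) + '}' + cleaned_value
assert old == new == '{2}uid=admin,ou=people'
```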

View file

@ -202,7 +202,7 @@ def main():
result = mod.connection.search_ext_s(
base=mod.dn,
scope=ldap.SCOPE_BASE,
filterstr="(%s=*)" % mod.attr,
filterstr=f"({mod.attr}=*)",
attrlist=[mod.attr])
if len(result) != 1:
module.fail_json(msg="The entry does not exist or does not contain the specified attribute.")
@ -216,14 +216,14 @@ def main():
break
except ldap.NO_SUCH_ATTRIBUTE:
if tries == max_tries:
module.fail_json(msg="The increment could not be applied after " + str(max_tries) + " tries.")
module.fail_json(msg=f"The increment could not be applied after {max_tries} tries.")
return
else:
result = mod.connection.search_ext_s(
base=mod.dn,
scope=ldap.SCOPE_BASE,
filterstr="(%s=*)" % mod.attr,
filterstr=f"({mod.attr}=*)",
attrlist=[mod.attr])
if len(result) == 1:
ret = str(int(result[0][1][mod.attr][0]) + mod.increment)

View file

@ -234,7 +234,7 @@ class LdapSearch(LdapGeneric):
else:
return ldap_entries
except ldap.NO_SUCH_OBJECT:
self.module.fail_json(msg="Base not found: {0}".format(self.dn))
self.module.fail_json(msg=f"Base not found: {self.dn}")
if __name__ == '__main__':

View file

@ -117,7 +117,7 @@ def post_annotation(module):
name = module.params['name']
title = module.params['title']
url = 'https://metrics-api.librato.com/v1/annotations/%s' % name
url = f'https://metrics-api.librato.com/v1/annotations/{name}'
params = {}
params['title'] = title
@ -145,9 +145,9 @@ def post_annotation(module):
response_body = info['body']
if info['status'] != 201:
if info['status'] >= 400:
module.fail_json(msg="Request Failed. Response code: " + response_code + " Response body: " + response_body)
module.fail_json(msg=f"Request Failed. Response code: {response_code} Response body: {response_body}")
else:
module.fail_json(msg="Request Failed. Response code: " + response_code)
module.fail_json(msg=f"Request Failed. Response code: {response_code}")
response = response.read()
module.exit_json(changed=True, annotation=response)

View file

@ -367,7 +367,7 @@ def linodeServers(module, api, state, name,
if not servers:
for arg in (name, plan, distribution, datacenter):
if not arg:
module.fail_json(msg='%s is required for %s state' % (arg, state))
module.fail_json(msg=f'{arg} is required for {state} state')
# Create linode entity
new_server = True
@ -379,25 +379,25 @@ def linodeServers(module, api, state, name,
PaymentTerm=payment_term)
linode_id = res['LinodeID']
# Update linode Label to match name
api.linode_update(LinodeId=linode_id, Label='%s-%s' % (linode_id, name))
api.linode_update(LinodeId=linode_id, Label=f'{linode_id}-{name}')
# Update Linode with Ansible configuration options
api.linode_update(LinodeId=linode_id, LPM_DISPLAYGROUP=displaygroup, WATCHDOG=watchdog, **kwargs)
# Save server
servers = api.linode_list(LinodeId=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}")
# Add private IP to Linode
if private_ip:
try:
res = api.linode_ip_addprivate(LinodeID=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
if not disks:
for arg in (name, linode_id, distribution):
if not arg:
module.fail_json(msg='%s is required for %s state' % (arg, state))
module.fail_json(msg=f'{arg} is required for {state} state')
# Create disks (1 from distrib, 1 for SWAP)
new_server = True
try:
@ -413,18 +413,18 @@ def linodeServers(module, api, state, name,
res = api.linode_disk_createfromdistribution(
LinodeId=linode_id, DistributionID=distribution,
rootPass=password, rootSSHKey=ssh_pub_key,
Label='%s data disk (lid: %s)' % (name, linode_id),
Label=f'{name} data disk (lid: {linode_id})',
Size=size)
else:
res = api.linode_disk_createfromdistribution(
LinodeId=linode_id, DistributionID=distribution,
rootPass=password,
Label='%s data disk (lid: %s)' % (name, linode_id),
Label=f'{name} data disk (lid: {linode_id})',
Size=size)
jobs.append(res['JobID'])
# Create SWAP disk
res = api.linode_disk_create(LinodeId=linode_id, Type='swap',
Label='%s swap disk (lid: %s)' % (name, linode_id),
Label=f'{name} swap disk (lid: {linode_id})',
Size=swap)
# Create individually listed disks at specified size
if additional_disks:
@ -437,12 +437,12 @@ def linodeServers(module, api, state, name,
jobs.append(res['JobID'])
except Exception as e:
# TODO: destroy linode ?
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
if not configs:
for arg in (name, linode_id, distribution):
if not arg:
module.fail_json(msg='%s is required for %s state' % (arg, state))
module.fail_json(msg=f'{arg} is required for {state} state')
# Check architecture
for distrib in api.avail_distributions():
@ -456,7 +456,7 @@ def linodeServers(module, api, state, name,
# Get latest kernel matching arch if kernel_id is not specified
if not kernel_id:
for kernel in api.avail_kernels():
if not kernel['LABEL'].startswith('Latest %s' % arch):
if not kernel['LABEL'].startswith(f'Latest {arch}'):
continue
kernel_id = kernel['KERNELID']
break
@ -477,10 +477,10 @@ def linodeServers(module, api, state, name,
new_server = True
try:
api.linode_config_create(LinodeId=linode_id, KernelId=kernel_id,
Disklist=disks_list, Label='%s config' % name)
Disklist=disks_list, Label=f'{name} config')
configs = api.linode_config_list(LinodeId=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
# Start / Ensure servers are running
for server in servers:
@ -505,12 +505,11 @@ def linodeServers(module, api, state, name,
time.sleep(5)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg='Timeout waiting on %s (lid: %s)' % (server['LABEL'], server['LINODEID']))
module.fail_json(msg=f"Timeout waiting on {server['LABEL']} (lid: {server['LINODEID']})")
# Get a fresh copy of the server details
server = api.linode_list(LinodeId=server['LINODEID'])[0]
if server['STATUS'] == -2:
module.fail_json(msg='%s (lid: %s) failed to boot' %
(server['LABEL'], server['LINODEID']))
module.fail_json(msg=f"{server['LABEL']} (lid: {server['LINODEID']}) failed to boot")
# From now on we know the task is a success
# Build instance report
instance = getInstanceDetails(api, server)
@ -528,7 +527,7 @@ def linodeServers(module, api, state, name,
elif state in ('stopped',):
if not servers:
module.fail_json(msg='Server (lid: %s) not found' % (linode_id))
module.fail_json(msg=f'Server (lid: {linode_id}) not found')
for server in servers:
instance = getInstanceDetails(api, server)
@ -536,7 +535,7 @@ def linodeServers(module, api, state, name,
try:
res = api.linode_shutdown(LinodeId=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
instance['status'] = 'Stopping'
changed = True
else:
@ -545,14 +544,14 @@ def linodeServers(module, api, state, name,
elif state in ('restarted',):
if not servers:
module.fail_json(msg='Server (lid: %s) not found' % (linode_id))
module.fail_json(msg=f'Server (lid: {linode_id}) not found')
for server in servers:
instance = getInstanceDetails(api, server)
try:
res = api.linode_reboot(LinodeId=server['LINODEID'])
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
instance['status'] = 'Restarting'
changed = True
instances.append(instance)
@ -563,7 +562,7 @@ def linodeServers(module, api, state, name,
try:
api.linode_delete(LinodeId=server['LINODEID'], skipChecks=True)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
instance['status'] = 'Deleting'
changed = True
instances.append(instance)
@ -672,7 +671,7 @@ def main():
api = linode_api.Api(api_key)
api.test_echo()
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
module.fail_json(msg=f"{e.value[0]['ERRORMESSAGE']}", exception=traceback.format_exc())
linodeServers(module, api, state, name,
displaygroup, plan,

View file

@ -190,7 +190,7 @@ def create_linode(module, client, **kwargs):
try:
response = client.linode.instance_create(**kwargs)
except Exception as exception:
module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception)
module.fail_json(msg=f'Unable to query the Linode API. Saw: {exception}')
try:
if isinstance(response, tuple):
@ -215,7 +215,7 @@ def maybe_instance_from_label(module, client):
except IndexError:
return None
except Exception as exception:
module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception)
module.fail_json(msg=f'Unable to query the Linode API. Saw: {exception}')
def initialise_module():

View file

@ -277,7 +277,7 @@ def ss_parse(raw):
if len(lines) == 0 or not lines[0].startswith('Netid '):
# unexpected stdout from ss
raise EnvironmentError('Unknown stdout format of `ss`: {0}'.format(raw))
raise EnvironmentError(f'Unknown stdout format of `ss`: {raw}')
# skip headers (-H arg is not present on e.g. Ubuntu 16)
lines = lines[1:]
@ -294,8 +294,8 @@ def ss_parse(raw):
except ValueError:
# unexpected stdout from ss
raise EnvironmentError(
'Expected `ss` table layout "Netid, State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port" and \
optionally "Process", but got something else: {0}'.format(line)
'Expected `ss` table layout "Netid, State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port" and '
f'optionally "Process", but got something else: {line}'
)
conns = regex_conns.search(local_addr_port)
@ -395,7 +395,7 @@ def main():
break
if bin_path is None:
raise EnvironmentError('Unable to find any of the supported commands in PATH: {0}'.format(", ".join(sorted(commands_map))))
raise EnvironmentError(f"Unable to find any of the supported commands in PATH: {', '.join(sorted(commands_map))}")
# which ports are listening for connections?
args = commands_map[command]['args']
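Splitting the long ss_parse message into adjacent string literals relies on Python's implicit concatenation, which inserts no separator of its own; the first literal has to end with an explicit space or the words run together. A minimal sketch of the pitfall, with a hypothetical `ss` line:

```python
line = 'tcp LISTEN 0 128 *:22 *:*'

# Without a trailing space the literals fuse: 'and' + 'optionally' -> 'andoptionally'.
bad = 'expected layout "..." and' f'optionally "Process": {line}'
good = 'expected layout "..." and ' f'optionally "Process": {line}'

assert 'andoptionally' in bad
assert 'and optionally' in good
```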

View file

@ -65,10 +65,10 @@ def gather_lldp(module):
path = path.split(".")
path_components, final = path[:-1], path[-1]
elif final in current_dict and isinstance(current_dict[final], str):
current_dict[final] += '\n' + entry
current_dict[final] += f"\n{entry}"
continue
elif final in current_dict and isinstance(current_dict[final], list):
current_dict[final][-1] += '\n' + entry
current_dict[final][-1] += f"\n{entry}"
continue
else:
continue

View file

@ -135,9 +135,7 @@ class LocaleGen(StateModuleHelper):
version="13.0.0", collection_name="community.general"
)
else:
self.do_raise('{0} and {1} are missing. Is the package "locales" installed?'.format(
VAR_LIB_LOCALES, ETC_LOCALE_GEN
))
self.do_raise(f'{VAR_LIB_LOCALES} and {ETC_LOCALE_GEN} are missing. Is the package "locales" installed?')
self.runner = locale_runner(self.module)
@ -176,7 +174,7 @@ class LocaleGen(StateModuleHelper):
locales_not_found = self.locale_get_not_present(locales_not_found)
if locales_not_found:
self.do_raise("The following locales you have entered are not available on your system: {0}".format(', '.join(locales_not_found)))
self.do_raise(f"The following locales you have entered are not available on your system: {', '.join(locales_not_found)}")
def is_present(self):
return not self.locale_get_not_present(self.vars.name)
@ -210,11 +208,11 @@ class LocaleGen(StateModuleHelper):
locale_regexes = []
for name in names:
search_string = r'^#?\s*%s (?P<charset>.+)' % re.escape(name)
search_string = rf'^#?\s*{re.escape(name)} (?P<charset>.+)'
if enabled:
new_string = r'%s \g<charset>' % (name)
new_string = rf'{name} \g<charset>'
else:
new_string = r'# %s \g<charset>' % (name)
new_string = rf'# {name} \g<charset>'
re_search = re.compile(search_string)
locale_regexes.append([re_search, new_string])
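The locale_gen hunk above combines the raw and f prefixes: `rf''` keeps backslashes intact for the regex engine while still interpolating `re.escape(name)`; any braces that belonged to the pattern itself would need doubling. A sketch with a hypothetical locale line:

```python
import re

name = 'en_US.UTF-8'

# rf'': raw (backslashes survive) + f (interpolation still happens).
search_string = rf'^#?\s*{re.escape(name)} (?P<charset>.+)'

# Matches a commented-out entry and captures the charset.
m = re.match(search_string, '# en_US.UTF-8 UTF-8')
assert m is not None
assert m.group('charset') == 'UTF-8'
```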

View file

@ -96,12 +96,12 @@ def follow_log(module, le_path, logs, name=None, logtype=None):
rc, out, err = module.run_command(cmd)
if not query_log_status(module, le_path, log):
module.fail_json(msg="failed to follow '%s': %s" % (log, err.strip()))
module.fail_json(msg=f"failed to follow '{log}': {err.strip()}")
followed_count += 1
if followed_count > 0:
module.exit_json(changed=True, msg="followed %d log(s)" % (followed_count,))
module.exit_json(changed=True, msg=f"followed {followed_count} log(s)")
module.exit_json(changed=False, msg="log(s) already followed")
@ -122,12 +122,12 @@ def unfollow_log(module, le_path, logs):
rc, out, err = module.run_command([le_path, 'rm', log])
if query_log_status(module, le_path, log):
module.fail_json(msg="failed to remove '%s': %s" % (log, err.strip()))
module.fail_json(msg=f"failed to remove '{log}': {err.strip()}")
removed_count += 1
if removed_count > 0:
module.exit_json(changed=True, msg="removed %d package(s)" % removed_count)
module.exit_json(changed=True, msg=f"removed {removed_count} package(s)")
module.exit_json(changed=False, msg="log(s) already unfollowed")

View file

@ -59,7 +59,7 @@ from ansible.module_utils.basic import AnsibleModule
def send_msg(module, token, msg, api, port):
message = "{0} {1}\n".format(token, msg)
message = f"{token} {msg}\n"
api_ip = socket.gethostbyname(api)
@ -69,7 +69,7 @@ def send_msg(module, token, msg, api, port):
if not module.check_mode:
s.send(message)
except Exception as e:
module.fail_json(msg="failed to send message, msg=%s" % e)
module.fail_json(msg=f"failed to send message, msg={e}")
s.close()
@ -93,7 +93,7 @@ def main():
send_msg(module, token, msg, api, port)
changed = True
except Exception as e:
module.fail_json(msg="unable to send msg: %s" % e)
module.fail_json(msg=f"unable to send msg: {e}")
module.exit_json(changed=changed, msg=msg)

View file

@ -105,7 +105,7 @@ def install_plugin(module, plugin_bin, plugin_name, version, proxy_host, proxy_p
cmd_args.extend(["--version", version])
if proxy_host and proxy_port:
cmd_args.extend(["-DproxyHost=%s" % proxy_host, "-DproxyPort=%s" % proxy_port])
cmd_args.extend([f"-DproxyHost={proxy_host}", f"-DproxyPort={proxy_port}"])
cmd = " ".join(cmd_args)

View file

@ -366,7 +366,7 @@ def reset_uuid_pv(module, device):
pvchange_rc, pvchange_out, pvchange_err = module.run_command(pvchange_cmd_with_opts)
dummy, new_uuid, dummy = module.run_command(pvs_cmd_with_opts, check_rc=True)
if orig_uuid.strip() == new_uuid.strip():
module.fail_json(msg="PV (%s) UUID change failed" % (device), rc=pvchange_rc, err=pvchange_err, out=pvchange_out)
module.fail_json(msg=f"PV ({device}) UUID change failed", rc=pvchange_rc, err=pvchange_err, out=pvchange_out)
else:
changed = True
@ -437,17 +437,17 @@ def main():
# check given devices
for test_dev in dev_list:
if not os.path.exists(test_dev):
module.fail_json(msg="Device %s not found." % test_dev)
module.fail_json(msg=f"Device {test_dev} not found.")
# get pv list
pvs_cmd = module.get_bin_path('pvs', True)
if dev_list:
pvs_filter_pv_name = ' || '.join(
'pv_name = {0}'.format(x)
f'pv_name = {x}'
for x in itertools.chain(dev_list, module.params['pvs'])
)
pvs_filter_vg_name = 'vg_name = {0}'.format(vg)
pvs_filter = ["--select", "{0} || {1}".format(pvs_filter_pv_name, pvs_filter_vg_name)]
pvs_filter_vg_name = f'vg_name = {vg}'
pvs_filter = ["--select", f"{pvs_filter_pv_name} || {pvs_filter_vg_name}"]
else:
pvs_filter = []
rc, current_pvs, err = module.run_command([pvs_cmd, "--noheadings", "-o", "pv_name,vg_name", "--separator", ";"] + pvs_filter)
@ -458,7 +458,7 @@ def main():
pvs = parse_pvs(module, current_pvs)
used_pvs = [pv for pv in pvs if pv['name'] in dev_list and pv['vg_name'] and pv['vg_name'] != vg]
if used_pvs:
module.fail_json(msg="Device %s is already in %s volume group." % (used_pvs[0]['name'], used_pvs[0]['vg_name']))
module.fail_json(msg=f"Device {used_pvs[0]['name']} is already in {used_pvs[0]['vg_name']} volume group.")
if this_vg is None:
if present_state:
@ -474,13 +474,13 @@ def main():
if rc == 0:
changed = True
else:
module.fail_json(msg="Creating physical volume '%s' failed" % current_dev, rc=rc, err=err)
module.fail_json(msg=f"Creating physical volume '{current_dev}' failed", rc=rc, err=err)
vgcreate_cmd = module.get_bin_path('vgcreate')
rc, dummy, err = module.run_command([vgcreate_cmd] + vgoptions + ['-s', pesize, vg] + dev_list)
if rc == 0:
changed = True
else:
module.fail_json(msg="Creating volume group '%s' failed" % vg, rc=rc, err=err)
module.fail_json(msg=f"Creating volume group '{vg}' failed", rc=rc, err=err)
else:
if state == 'absent':
if module.check_mode:
@ -493,9 +493,9 @@ def main():
if rc == 0:
module.exit_json(changed=True)
else:
module.fail_json(msg="Failed to remove volume group %s" % (vg), rc=rc, err=err)
module.fail_json(msg=f"Failed to remove volume group {vg}", rc=rc, err=err)
else:
module.fail_json(msg="Refuse to remove non-empty volume group %s without force=true" % (vg))
module.fail_json(msg=f"Refuse to remove non-empty volume group {vg} without force=true")
# activate/deactivate existing VG
elif state == 'active':
changed = activate_vg(module=module, vg=vg, active=True)
@ -535,14 +535,14 @@ def main():
if rc == 0:
changed = True
else:
module.fail_json(msg="Creating physical volume '%s' failed" % current_dev, rc=rc, err=err)
module.fail_json(msg=f"Creating physical volume '{current_dev}' failed", rc=rc, err=err)
# add PV to our VG
vgextend_cmd = module.get_bin_path('vgextend', True)
rc, dummy, err = module.run_command([vgextend_cmd, vg] + devs_to_add)
if rc == 0:
changed = True
else:
module.fail_json(msg="Unable to extend %s by %s." % (vg, ' '.join(devs_to_add)), rc=rc, err=err)
module.fail_json(msg=f"Unable to extend {vg} by {' '.join(devs_to_add)}.", rc=rc, err=err)
# remove some PV from our VG
if devs_to_remove:
@ -551,7 +551,7 @@ def main():
if rc == 0:
changed = True
else:
module.fail_json(msg="Unable to reduce %s by %s." % (vg, ' '.join(devs_to_remove)), rc=rc, err=err)
module.fail_json(msg=f"Unable to reduce {vg} by {' '.join(devs_to_remove)}.", rc=rc, err=err)
module.exit_json(changed=changed)

View file

@ -82,15 +82,15 @@ class LvgRename(object):
if old_vg_exists:
if new_vg_exists:
self.module.fail_json(msg='The new VG name (%s) is already in use.' % (self.vg_new))
self.module.fail_json(msg=f'The new VG name ({self.vg_new}) is already in use.')
else:
self._rename_vg()
else:
if new_vg_exists:
self.result['msg'] = 'The new VG (%s) already exists, nothing to do.' % (self.vg_new)
self.result['msg'] = f'The new VG ({self.vg_new}) already exists, nothing to do.'
self.module.exit_json(**self.result)
else:
self.module.fail_json(msg='Both current (%s) and new (%s) VG are missing.' % (self.vg, self.vg_new))
self.module.fail_json(msg=f'Both current ({self.vg}) and new ({self.vg_new}) VG are missing.')
self.module.exit_json(**self.result)
@ -141,7 +141,7 @@ class LvgRename(object):
self.result['diff'] = {'before': {'vg': self.vg}, 'after': {'vg': self.vg_new}}
if self.module.check_mode:
self.result['msg'] = "Running in check mode. The module would rename VG %s to %s." % (self.vg, self.vg_new)
self.result['msg'] = f"Running in check mode. The module would rename VG {self.vg} to {self.vg_new}."
self.result['changed'] = True
else:
vgrename_cmd_with_opts = [vgrename_cmd, self.vg, self.vg_new]

View file

@ -90,7 +90,7 @@ def get_pv_size(module, device):
def rescan_device(module, device):
"""Perform storage rescan for the device."""
base_device = os.path.basename(device)
is_partition = "/sys/class/block/{0}/partition".format(base_device)
is_partition = f"/sys/class/block/{base_device}/partition"
# Determine parent device if partition exists
parent_device = base_device
@ -101,10 +101,7 @@ def rescan_device(module, device):
)
# Determine rescan path
rescan_path = "/sys/block/{0}/device/{1}".format(
parent_device,
"rescan_controller" if base_device.startswith('nvme') else "rescan"
)
rescan_path = f"/sys/block/{parent_device}/device/{'rescan_controller' if base_device.startswith('nvme') else 'rescan'}"
if os.path.exists(rescan_path):
try:
@ -112,9 +109,9 @@ def rescan_device(module, device):
f.write('1')
return True
except IOError as e:
module.warn("Failed to rescan device {0}: {1}".format(device, str(e)))
module.warn(f"Failed to rescan device {device}: {e!s}")
else:
module.warn("Rescan path does not exist for device {0}".format(device))
module.warn(f"Rescan path does not exist for device {device}")
return False
@@ -138,7 +135,7 @@ def main():
# Validate device existence for present state
if state == 'present' and not os.path.exists(device):
module.fail_json(msg="Device %s not found" % device)
module.fail_json(msg=f"Device {device} not found")
is_pv = get_pv_status(module, device)
@@ -191,9 +188,9 @@ def main():
# Generate final message
if actions:
msg = "PV %s: %s" % (device, ', '.join(actions))
msg = f"PV {device}: {', '.join(actions)}"
else:
msg = "No changes needed for PV %s" % device
msg = f"No changes needed for PV {device}"
module.exit_json(changed=changed, msg=msg)
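As a standalone sanity check (not part of the module), the conversion pattern in the hunks above can be verified directly: the `%`-formatting being removed and the f-string replacing it render identical text. `device` and `actions` below are illustrative values, not taken from the module:

```python
# Hypothetical values, for illustration only.
device = "/dev/sdb1"
actions = ["resized PV", "rescanned device"]

# Old spelling (removed) vs. new spelling (added) in the hunk above.
old_msg = "PV %s: %s" % (device, ', '.join(actions))
new_msg = f"PV {device}: {', '.join(actions)}"

# Both produce the same message, so the change is behavior-preserving.
assert old_msg == new_msg
print(new_msg)  # PV /dev/sdb1: resized PV, rescanned device
```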


@@ -109,9 +109,9 @@ def main():
# Validate device existence
if not os.path.exists(source):
module.fail_json(msg="Source device %s not found" % source)
module.fail_json(msg=f"Source device {source} not found")
if not os.path.exists(destination):
module.fail_json(msg="Destination device %s not found" % destination)
module.fail_json(msg=f"Destination device {destination} not found")
if source == destination:
module.fail_json(msg="Source and destination devices must be different")
@@ -120,7 +120,7 @@ def main():
rc, out, err = ctx.run(device=device)
if rc != 0:
module.fail_json(
msg="Command failed: %s" % err,
msg=f"Command failed: {err}",
stdout=out,
stderr=err,
rc=rc,
@@ -136,22 +136,22 @@ def main():
return rc == 0
if not is_pv(source):
module.fail_json(msg="Source device %s is not a PV" % source)
module.fail_json(msg=f"Source device {source} is not a PV")
if not is_pv(destination):
module.fail_json(msg="Destination device %s is not a PV" % destination)
module.fail_json(msg=f"Destination device {destination} is not a PV")
vg_src = run_pvs_command("noheadings vg_name device", source)
vg_dest = run_pvs_command("noheadings vg_name device", destination)
if vg_src != vg_dest:
module.fail_json(
msg="Source and destination must be in the same VG. Source VG: '%s', Destination VG: '%s'." % (vg_src, vg_dest)
msg=f"Source and destination must be in the same VG. Source VG: '{vg_src}', Destination VG: '{vg_dest}'."
)
def get_allocated_pe(device):
try:
return int(run_pvs_command("noheadings pv_pe_alloc_count device", device))
except ValueError:
module.fail_json(msg="Invalid allocated PE count for device %s" % device)
module.fail_json(msg=f"Invalid allocated PE count for device {device}")
allocated = get_allocated_pe(source)
if allocated == 0:
@@ -162,7 +162,7 @@ def main():
try:
return int(run_pvs_command("noheadings pv_pe_count device", device))
except ValueError:
module.fail_json(msg="Invalid total PE count for device %s" % device)
module.fail_json(msg=f"Invalid total PE count for device {device}")
def get_free_pe(device):
return get_total_pe(device) - get_allocated_pe(device)
@@ -170,13 +170,13 @@ def main():
free_pe_dest = get_free_pe(destination)
if free_pe_dest < allocated:
module.fail_json(
msg="Destination device %s has only %d free physical extents, but source device %s has %d allocated extents. Not enough space." %
(destination, free_pe_dest, source, allocated)
msg=(f"Destination device {destination} has only {int(free_pe_dest)} free physical extents, but "
f"source device {source} has {int(allocated)} allocated extents. Not enough space.")
)
if module.check_mode:
changed = True
actions.append('would move data from %s to %s' % (source, destination))
actions.append(f'would move data from {source} to {destination}')
else:
pvmove_runner = CmdRunner(
module,
@@ -185,7 +185,7 @@ def main():
auto_answer=cmd_runner_fmt.as_bool("-y"),
atomic=cmd_runner_fmt.as_bool("--atomic"),
autobackup=cmd_runner_fmt.as_fixed("--autobackup", "y" if module.params['autobackup'] else "n"),
verbosity=cmd_runner_fmt.as_func(lambda v: ['-' + 'v' * v] if v > 0 else []),
verbosity=cmd_runner_fmt.as_func(lambda v: [f"-{'v' * v}"] if v > 0 else []),
source=cmd_runner_fmt.as_list(),
destination=cmd_runner_fmt.as_list(),
)
@@ -202,14 +202,14 @@ def main():
result['stderr'] = err
changed = True
actions.append('moved data from %s to %s' % (source, destination))
actions.append(f'moved data from {source} to {destination}')
result['changed'] = changed
result['actions'] = actions
if actions:
result['msg'] = "PV data move: %s" % ", ".join(actions)
result['msg'] = f"PV data move: {', '.join(actions)}"
else:
result['msg'] = "No data to move from %s" % source
result['msg'] = f"No data to move from {source}"
module.exit_json(**result)
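The one non-trivial conversion in this file is the `verbosity` formatter, where the string repetition moves inside the f-string. A minimal sketch, using only the two lambda bodies from the `CmdRunner` hunk above (`fmt_old`/`fmt_new` are names introduced here for illustration):

```python
# Old formatter body (removed) vs. new formatter body (added).
def fmt_old(v):
    return ['-' + 'v' * v] if v > 0 else []

def fmt_new(v):
    return [f"-{'v' * v}"] if v > 0 else []

# The repetition expression is evaluated identically inside the f-string.
for v in range(5):
    assert fmt_old(v) == fmt_new(v)

print(fmt_new(3))  # ['-vvv']
```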


@@ -379,7 +379,7 @@ def main():
if not size[0].isdigit():
raise ValueError()
except ValueError:
module.fail_json(msg="Bad size specification of '%s'" % size)
module.fail_json(msg=f"Bad size specification of '{size}'")
# when no unit, megabytes by default
if size_opt == 'l':
@@ -394,9 +394,9 @@ def main():
if rc != 0:
if state == 'absent':
module.exit_json(changed=False, stdout="Volume group %s does not exist." % vg)
module.exit_json(changed=False, stdout=f"Volume group {vg} does not exist.")
else:
module.fail_json(msg="Volume group %s does not exist." % vg, rc=rc, err=err)
module.fail_json(msg=f"Volume group {vg} does not exist.", rc=rc, err=err)
vgs = parse_vgs(current_vgs)
this_vg = vgs[0]
@@ -408,9 +408,9 @@ def main():
if rc != 0:
if state == 'absent':
module.exit_json(changed=False, stdout="Volume group %s does not exist." % vg)
module.exit_json(changed=False, stdout=f"Volume group {vg} does not exist.")
else:
module.fail_json(msg="Volume group %s does not exist." % vg, rc=rc, err=err)
module.fail_json(msg=f"Volume group {vg} does not exist.", rc=rc, err=err)
changed = False
@@ -425,7 +425,7 @@ def main():
else:
module.fail_json(msg="Snapshots of thin pool LVs are not supported.")
else:
module.fail_json(msg="Snapshot origin LV %s does not exist in volume group %s." % (lv, vg))
module.fail_json(msg=f"Snapshot origin LV {lv} does not exist in volume group {vg}.")
check_lv = snapshot
elif thinpool:
if lv:
@@ -434,7 +434,7 @@ def main():
if test_lv['name'] == thinpool:
break
else:
module.fail_json(msg="Thin pool LV %s does not exist in volume group %s." % (thinpool, vg))
module.fail_json(msg=f"Thin pool LV {thinpool} does not exist in volume group {vg}.")
check_lv = lv
else:
check_lv = thinpool
@@ -453,7 +453,7 @@ def main():
if state == 'present':
if size_operator is not None:
if size_operator == "-" or (size_whole not in ["VG", "PVS", "FREE", "ORIGIN", None]):
module.fail_json(msg="Bad size specification of '%s%s' for creating LV" % (size_operator, size))
module.fail_json(msg=f"Bad size specification of '{size_operator}{size}' for creating LV")
# Require size argument except for snapshot of thin volumes
if (lv or thinpool) and not size:
for test_lv in lvs:
@@ -467,36 +467,36 @@ def main():
cmd = [lvcreate_cmd] + test_opt + yesopt
if snapshot is not None:
if size:
cmd += ["-%s" % size_opt, "%s%s" % (size, size_unit)]
cmd += ["-s", "-n", snapshot] + opts + ["%s/%s" % (vg, lv)]
cmd += [f"-{size_opt}", f"{size}{size_unit}"]
cmd += ["-s", "-n", snapshot] + opts + [f"{vg}/{lv}"]
elif thinpool:
if lv:
if size_opt == 'l':
module.fail_json(changed=False, msg="Thin volume sizing with percentage not supported.")
size_opt = 'V'
cmd += ["-n", lv]
cmd += ["-%s" % size_opt, "%s%s" % (size, size_unit)]
cmd += opts + ["-T", "%s/%s" % (vg, thinpool)]
cmd += [f"-{size_opt}", f"{size}{size_unit}"]
cmd += opts + ["-T", f"{vg}/{thinpool}"]
else:
cmd += ["-n", lv]
cmd += ["-%s" % size_opt, "%s%s" % (size, size_unit)]
cmd += [f"-{size_opt}", f"{size}{size_unit}"]
cmd += opts + [vg] + pvs
rc, dummy, err = module.run_command(cmd)
if rc == 0:
changed = True
else:
module.fail_json(msg="Creating logical volume '%s' failed" % lv, rc=rc, err=err)
module.fail_json(msg=f"Creating logical volume '{lv}' failed", rc=rc, err=err)
else:
if state == 'absent':
# remove LV
if not force:
module.fail_json(msg="Sorry, no removal of logical volume %s without force=true." % (this_lv['name']))
module.fail_json(msg=f"Sorry, no removal of logical volume {this_lv['name']} without force=true.")
lvremove_cmd = module.get_bin_path("lvremove", required=True)
rc, dummy, err = module.run_command([lvremove_cmd] + test_opt + ["--force", "%s/%s" % (vg, this_lv['name'])])
rc, dummy, err = module.run_command([lvremove_cmd] + test_opt + ["--force", f"{vg}/{this_lv['name']}"])
if rc == 0:
module.exit_json(changed=True)
else:
module.fail_json(msg="Failed to remove logical volume %s" % (lv), rc=rc, err=err)
module.fail_json(msg=f"Failed to remove logical volume {lv}", rc=rc, err=err)
elif not size:
pass
@@ -523,14 +523,14 @@ def main():
tool = [module.get_bin_path("lvextend", required=True)]
else:
module.fail_json(
msg="Logical Volume %s could not be extended. Not enough free space left (%s%s required / %s%s available)" %
(this_lv['name'], (size_requested - this_lv['size']), unit, size_free, unit)
msg=(f"Logical Volume {this_lv['name']} could not be extended. Not enough free space left "
f"({size_requested - this_lv['size']}{unit} required / {size_free}{unit} available)")
)
elif shrink and this_lv['size'] > size_requested + this_vg['ext_size']: # more than an extent too large
if size_requested < 1:
module.fail_json(msg="Sorry, no shrinking of %s to 0 permitted." % (this_lv['name']))
module.fail_json(msg=f"Sorry, no shrinking of {this_lv['name']} to 0 permitted.")
elif not force:
module.fail_json(msg="Sorry, no shrinking of %s without force=true" % (this_lv['name']))
module.fail_json(msg=f"Sorry, no shrinking of {this_lv['name']} without force=true")
else:
tool = [module.get_bin_path("lvreduce", required=True), '--force']
@@ -539,22 +539,22 @@ def main():
tool += ['--resizefs']
cmd = tool + test_opt
if size_operator:
cmd += ["-%s" % size_opt, "%s%s%s" % (size_operator, size, size_unit)]
cmd += [f"-{size_opt}", f"{size_operator}{size}{size_unit}"]
else:
cmd += ["-%s" % size_opt, "%s%s" % (size, size_unit)]
cmd += ["%s/%s" % (vg, this_lv['name'])] + pvs
cmd += [f"-{size_opt}", f"{size}{size_unit}"]
cmd += [f"{vg}/{this_lv['name']}"] + pvs
rc, out, err = module.run_command(cmd)
if "Reached maximum COW size" in out:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err, out=out)
module.fail_json(msg=f"Unable to resize {lv} to {size}{size_unit}", rc=rc, err=err, out=out)
elif rc == 0:
changed = True
msg = "Volume %s resized to %s%s" % (this_lv['name'], size_requested, unit)
msg = f"Volume {this_lv['name']} resized to {size_requested}{unit}"
elif "matches existing size" in err or "matches existing size" in out:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'])
elif "not larger than existing size" in err or "not larger than existing size" in out:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'], msg="Original size is larger than requested size", err=err)
else:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err)
module.fail_json(msg=f"Unable to resize {lv} to {size}{size_unit}", rc=rc, err=err)
else:
# resize LV based on absolute values
@@ -563,9 +563,9 @@ def main():
tool = [module.get_bin_path("lvextend", required=True)]
elif shrink and float(size) < this_lv['size'] or size_operator == '-':
if float(size) == 0:
module.fail_json(msg="Sorry, no shrinking of %s to 0 permitted." % (this_lv['name']))
module.fail_json(msg=f"Sorry, no shrinking of {this_lv['name']} to 0 permitted.")
if not force:
module.fail_json(msg="Sorry, no shrinking of %s without force=true." % (this_lv['name']))
module.fail_json(msg=f"Sorry, no shrinking of {this_lv['name']} without force=true.")
else:
tool = [module.get_bin_path("lvreduce", required=True), '--force']
@@ -574,13 +574,13 @@ def main():
tool += ['--resizefs']
cmd = tool + test_opt
if size_operator:
cmd += ["-%s" % size_opt, "%s%s%s" % (size_operator, size, size_unit)]
cmd += [f"-{size_opt}", f"{size_operator}{size}{size_unit}"]
else:
cmd += ["-%s" % size_opt, "%s%s" % (size, size_unit)]
cmd += ["%s/%s" % (vg, this_lv['name'])] + pvs
cmd += [f"-{size_opt}", f"{size}{size_unit}"]
cmd += [f"{vg}/{this_lv['name']}"] + pvs
rc, out, err = module.run_command(cmd)
if "Reached maximum COW size" in out:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err, out=out)
module.fail_json(msg=f"Unable to resize {lv} to {size}{size_unit}", rc=rc, err=err, out=out)
elif rc == 0:
changed = True
elif "matches existing size" in err or "matches existing size" in out:
@@ -588,23 +588,23 @@ def main():
elif "not larger than existing size" in err or "not larger than existing size" in out:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'], msg="Original size is larger than requested size", err=err)
else:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err)
module.fail_json(msg=f"Unable to resize {lv} to {size}{size_unit}", rc=rc, err=err)
if this_lv is not None:
if active:
lvchange_cmd = module.get_bin_path("lvchange", required=True)
rc, dummy, err = module.run_command([lvchange_cmd, "-ay", "%s/%s" % (vg, this_lv['name'])])
rc, dummy, err = module.run_command([lvchange_cmd, "-ay", f"{vg}/{this_lv['name']}"])
if rc == 0:
module.exit_json(changed=((not this_lv['active']) or changed), vg=vg, lv=this_lv['name'], size=this_lv['size'])
else:
module.fail_json(msg="Failed to activate logical volume %s" % (lv), rc=rc, err=err)
module.fail_json(msg=f"Failed to activate logical volume {lv}", rc=rc, err=err)
else:
lvchange_cmd = module.get_bin_path("lvchange", required=True)
rc, dummy, err = module.run_command([lvchange_cmd, "-an", "%s/%s" % (vg, this_lv['name'])])
rc, dummy, err = module.run_command([lvchange_cmd, "-an", f"{vg}/{this_lv['name']}"])
if rc == 0:
module.exit_json(changed=(this_lv['active'] or changed), vg=vg, lv=this_lv['name'], size=this_lv['size'])
else:
module.fail_json(msg="Failed to deactivate logical volume %s" % (lv), rc=rc, err=err)
module.fail_json(msg=f"Failed to deactivate logical volume {lv}", rc=rc, err=err)
module.exit_json(changed=changed, msg=msg)
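The size-flag construction converted repeatedly above can also be checked in isolation. The values below are illustrative, not from the module; the point is that splitting one `"%s%s%s"` template into adjacent f-string fields changes nothing:

```python
# Hypothetical argument values, for illustration only.
size_opt, size, size_unit, size_operator = 'L', '10', 'g', '+'

# Old flag list (removed) vs. new flag list (added) in the hunks above.
old_flags = ["-%s" % size_opt, "%s%s%s" % (size_operator, size, size_unit)]
new_flags = [f"-{size_opt}", f"{size_operator}{size}{size_unit}"]

assert old_flags == new_flags
print(new_flags)  # ['-L', '+10g']
```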


@@ -697,8 +697,8 @@ class LxcContainerManagement(object):
for key, value in parsed_options:
key = key.strip()
value = value.strip()
new_entry = '%s = %s\n' % (key, value)
keyre = re.compile(r'%s(\s+)?=' % key)
new_entry = f'{key} = {value}\n'
keyre = re.compile(rf'{key}(\s+)?=')
for option_line in container_config:
# Look for key in config
if keyre.match(option_line):
@@ -784,7 +784,7 @@ class LxcContainerManagement(object):
rc, return_data, err = self.module.run_command(build_command)
if rc != 0:
message = "Failed executing %s." % os.path.basename(clone_cmd)
message = f"Failed executing {os.path.basename(clone_cmd)}."
self.failure(
err=err, rc=rc, msg=message, command=' '.join(
build_command
@@ -843,7 +843,7 @@ class LxcContainerManagement(object):
build_command.extend([
'--logfile',
os.path.join(
log_path, 'lxc-%s.log' % self.container_name
log_path, f'lxc-{self.container_name}.log'
),
'--logpriority',
self.module.params.get(
@@ -938,11 +938,9 @@ class LxcContainerManagement(object):
time.sleep(1)
self.failure(
lxc_container=self._container_data(),
error='Failed to start container [ %s ]' % self.container_name,
error=f'Failed to start container [ {self.container_name} ]',
rc=1,
msg='The container [ %s ] failed to start. Check to lxc is'
' available and that the container is in a functional'
' state.' % self.container_name
msg=f'The container [ {self.container_name} ] failed to start. Check that lxc is available and that the container is in a functional state.'
)
def _check_archive(self):
@@ -1002,12 +1000,10 @@ class LxcContainerManagement(object):
else:
self.failure(
lxc_container=self._container_data(),
error='Failed to destroy container'
' [ %s ]' % self.container_name,
error=f'Failed to destroy container [ {self.container_name} ]',
rc=1,
msg='The container [ %s ] failed to be destroyed. Check'
' that lxc is available and that the container is in a'
' functional state.' % self.container_name
msg=(f'The container [ {self.container_name} ] failed to be destroyed. '
'Check that lxc is available and that the container is in a functional state.')
)
def _frozen(self, count=0):
@@ -1129,12 +1125,9 @@ class LxcContainerManagement(object):
elif not self._container_startup():
self.failure(
lxc_container=self._container_data(),
error='Failed to start container'
' [ %s ]' % self.container_name,
error=f'Failed to start container [ {self.container_name} ]',
rc=1,
msg='The container [ %s ] failed to start. Check to lxc is'
' available and that the container is in a functional'
' state.' % self.container_name
msg=f'The container [ {self.container_name} ] failed to start. Check that lxc is available and that the container is in a functional state.'
)
# Return data
@@ -1210,7 +1203,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to read vg %s' % vg_name,
msg=f'failed to read vg {vg_name}',
command=' '.join(build_command)
)
@@ -1241,7 +1234,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to read lv %s' % lv,
msg=f'failed to read lv {lv}',
command=' '.join(build_command)
)
@@ -1267,8 +1260,7 @@ class LxcContainerManagement(object):
if free_space < float(snapshot_size_gb):
message = (
'Snapshot size [ %s ] is > greater than [ %s ] on volume group'
' [ %s ]' % (snapshot_size_gb, free_space, vg)
f'Snapshot size [ {snapshot_size_gb} ] is greater than [ {free_space} ] on volume group [ {vg} ]'
)
self.failure(
error='Not enough space to create snapshot',
@@ -1283,15 +1275,14 @@ class LxcContainerManagement(object):
snapshot_name,
"-s",
os.path.join(vg, source_lv),
"-L%sg" % snapshot_size_gb
f"-L{snapshot_size_gb}g"
]
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='Failed to Create LVM snapshot %s/%s --> %s'
% (vg, source_lv, snapshot_name)
msg=f'Failed to Create LVM snapshot {vg}/{source_lv} --> {snapshot_name}'
)
def _lvm_lv_mount(self, lv_name, mount_point):
@@ -1307,7 +1298,7 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('mount', True),
"/dev/%s/%s" % (vg, lv_name),
f"/dev/{vg}/{lv_name}",
mount_point,
]
rc, stdout, err = self.module.run_command(build_command)
@@ -1315,8 +1306,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to mountlvm lv %s/%s to %s'
% (vg, lv_name, mount_point)
msg=f'failed to mount LVM LV {vg}/{lv_name} to {mount_point}'
)
def _create_tar(self, source_dir):
@@ -1336,17 +1326,11 @@ class LxcContainerManagement(object):
compression_type = LXC_COMPRESSION_MAP[archive_compression]
# remove trailing / if present.
archive_name = '%s.%s' % (
os.path.join(
archive_path,
self.container_name
),
compression_type['extension']
)
archive_name = f"{os.path.join(archive_path, self.container_name)}.{compression_type['extension']}"
build_command = [
self.module.get_bin_path('tar', True),
'--directory=%s' % os.path.realpath(source_dir),
f'--directory={os.path.realpath(source_dir)}',
compression_type['argument'],
archive_name,
'.'
@@ -1379,14 +1363,14 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('lvremove', True),
"-f",
"%s/%s" % (vg, lv_name),
f"{vg}/{lv_name}",
]
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='Failed to remove LVM LV %s/%s' % (vg, lv_name),
msg=f'Failed to remove LVM LV {vg}/{lv_name}',
command=' '.join(build_command)
)
@@ -1442,7 +1426,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to unmount [ %s ]' % mount_point,
msg=f'failed to unmount [ {mount_point} ]',
command=' '.join(build_command)
)
@@ -1460,7 +1444,7 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('mount', True),
'-t', 'overlayfs',
'-o', 'lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
'-o', f'lowerdir={lowerdir},upperdir={upperdir}',
'overlayfs',
mount_point,
]
@@ -1469,8 +1453,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to mount overlayfs:%s:%s to %s -- Command: %s'
% (lowerdir, upperdir, mount_point, build_command)
msg=f'failed to mount overlayfs:{lowerdir}:{upperdir} to {mount_point} -- Command: {build_command}'
)
def _container_create_tar(self):
@ -1506,7 +1489,7 @@ class LxcContainerManagement(object):
mount_point = os.path.join(work_dir, 'rootfs')
# Set the snapshot name if needed
snapshot_name = '%s_lxc_snapshot' % self.container_name
snapshot_name = f'{self.container_name}_lxc_snapshot'
container_state = self._get_state()
try:
@@ -1542,11 +1525,9 @@ class LxcContainerManagement(object):
)
else:
self.failure(
err='snapshot [ %s ] already exists' % snapshot_name,
err=f'snapshot [ {snapshot_name} ] already exists',
rc=1,
msg='The snapshot [ %s ] already exists. Please clean'
' up old snapshot of containers before continuing.'
% snapshot_name
msg=f'The snapshot [ {snapshot_name} ] already exists. Please clean up old snapshot of containers before continuing.'
)
elif overlayfs_backed:
lowerdir, upperdir = lxc_rootfs.split(':')[1:]
@@ -1581,11 +1562,9 @@ class LxcContainerManagement(object):
def check_count(self, count, method):
if count > 1:
self.failure(
error='Failed to %s container' % method,
error=f'Failed to {method} container',
rc=1,
msg='The container [ %s ] failed to %s. Check to lxc is'
' available and that the container is in a functional'
' state.' % (self.container_name, method)
msg=f'The container [ {self.container_name} ] failed to {method}. Check that lxc is available and that the container is in a functional state.'
)
def failure(self, **kwargs):


@@ -479,10 +479,10 @@ class LXDContainerManagement(object):
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
self.key_file = f"{os.environ['HOME']}/.config/lxc/client.key"
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = '{0}/.config/lxc/client.crt'.format(os.environ['HOME'])
self.cert_file = f"{os.environ['HOME']}/.config/lxc/client.crt"
self.debug = self.module._verbosity >= 4
try:
@@ -506,7 +506,7 @@ class LXDContainerManagement(object):
# LXD (3.19) Rest API provides instances endpoint, failback to containers and virtual-machines
# https://documentation.ubuntu.com/lxd/en/latest/rest-api/#instances-containers-and-virtual-machines
self.api_endpoint = '/1.0/instances'
check_api_endpoint = self.client.do('GET', '{0}?project='.format(self.api_endpoint), ok_error_codes=[404])
check_api_endpoint = self.client.do('GET', f'{self.api_endpoint}?project=', ok_error_codes=[404])
if check_api_endpoint['error_code'] == 404:
if self.type == 'container':
@@ -528,15 +528,15 @@ class LXDContainerManagement(object):
self.config[attr] = param_val
def _get_instance_json(self):
url = '{0}/{1}'.format(self.api_endpoint, self.name)
url = f'{self.api_endpoint}/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
return self.client.do('GET', url, ok_error_codes=[404])
def _get_instance_state_json(self):
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
url = f'{self.api_endpoint}/{self.name}/state'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
return self.client.do('GET', url, ok_error_codes=[404])
@staticmethod
@@ -546,9 +546,9 @@ class LXDContainerManagement(object):
return ANSIBLE_LXD_STATES[resp_json['metadata']['status']]
def _change_state(self, action, force_stop=False):
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
url = f'{self.api_endpoint}/{self.name}/state'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
body_json = {'action': action, 'timeout': self.timeout}
if force_stop:
body_json['force'] = True
@@ -563,7 +563,7 @@ class LXDContainerManagement(object):
if self.project:
url_params['project'] = self.project
if url_params:
url = '{0}?{1}'.format(url, urlencode(url_params))
url = f'{url}?{urlencode(url_params)}'
config = self.config.copy()
config['name'] = self.name
if self.type not in self.api_endpoint:
@@ -585,9 +585,9 @@ class LXDContainerManagement(object):
self.actions.append('restart')
def _delete_instance(self):
url = '{0}/{1}'.format(self.api_endpoint, self.name)
url = f'{self.api_endpoint}/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
if not self.module.check_mode:
self.client.do('DELETE', url)
self.actions.append('delete')
@@ -732,9 +732,9 @@ class LXDContainerManagement(object):
else:
body_json[param] = self.config[param]
self.diff['after']['instance'] = body_json
url = '{0}/{1}'.format(self.api_endpoint, self.name)
url = f'{self.api_endpoint}/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
if not self.module.check_mode:
self.client.do('PUT', url, body_json=body_json)
self.actions.append('apply_instance_configs')
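The recurring URL-building conversion in this file (`'{0}?{1}'.format(...)` to an f-string wrapping `urlencode`) can be sketched on its own. The endpoint, name, and project below are illustrative values, and the stdlib `urllib.parse.urlencode` stands in for the module's own `urlencode` import:

```python
from urllib.parse import urlencode

# Hypothetical values, for illustration only.
api_endpoint = '/1.0/instances'
name, project = 'web01', 'default'

# Old spelling (removed): two .format() passes.
old_url = '{0}/{1}'.format(api_endpoint, name)
old_url = '{0}?{1}'.format(old_url, urlencode(dict(project=project)))

# New spelling (added): the same two steps as f-strings.
new_url = f'{api_endpoint}/{name}'
new_url = f'{new_url}?{urlencode(dict(project=project))}'

assert old_url == new_url
print(new_url)  # /1.0/instances/web01?project=default
```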


@@ -256,10 +256,10 @@ class LXDProfileManagement(object):
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
self.key_file = f"{os.environ['HOME']}/.config/lxc/client.key"
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = '{0}/.config/lxc/client.crt'.format(os.environ['HOME'])
self.cert_file = f"{os.environ['HOME']}/.config/lxc/client.crt"
self.debug = self.module._verbosity >= 4
try:
@@ -290,9 +290,9 @@ class LXDProfileManagement(object):
self.config[attr] = param_val
def _get_profile_json(self):
url = '/1.0/profiles/{0}'.format(self.name)
url = f'/1.0/profiles/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
return self.client.do('GET', url, ok_error_codes=[404])
@staticmethod
@@ -327,16 +327,16 @@ class LXDProfileManagement(object):
def _create_profile(self):
url = '/1.0/profiles'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', url, config)
self.actions.append('create')
def _rename_profile(self):
url = '/1.0/profiles/{0}'.format(self.name)
url = f'/1.0/profiles/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
config = {'name': self.new_name}
self.client.do('POST', url, config)
self.actions.append('rename')
@@ -445,16 +445,16 @@ class LXDProfileManagement(object):
config = self._generate_new_config(config)
# upload config to lxd
url = '/1.0/profiles/{0}'.format(self.name)
url = f'/1.0/profiles/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
self.client.do('PUT', url, config)
self.actions.append('apply_profile_configs')
def _delete_profile(self):
url = '/1.0/profiles/{0}'.format(self.name)
url = f'/1.0/profiles/{self.name}'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
url = f'{url}?{urlencode(dict(project=self.project))}'
self.client.do('DELETE', url)
self.actions.append('delete')


@@ -246,7 +246,7 @@ class LXDProjectManagement(object):
def _get_project_json(self):
return self.client.do(
'GET', '/1.0/projects/{0}'.format(self.name),
'GET', f'/1.0/projects/{self.name}',
ok_error_codes=[404]
)
@@ -287,7 +287,7 @@ class LXDProjectManagement(object):
def _rename_project(self):
config = {'name': self.new_name}
self.client.do('POST', '/1.0/projects/{0}'.format(self.name), config)
self.client.do('POST', f'/1.0/projects/{self.name}', config)
self.actions.append('rename')
self.name = self.new_name
@@ -357,11 +357,11 @@ class LXDProjectManagement(object):
else:
config = self.config.copy()
# upload config to lxd
self.client.do('PUT', '/1.0/projects/{0}'.format(self.name), config)
self.client.do('PUT', f'/1.0/projects/{self.name}', config)
self.actions.append('apply_projects_configs')
def _delete_project(self):
self.client.do('DELETE', '/1.0/projects/{0}'.format(self.name))
self.client.do('DELETE', f'/1.0/projects/{self.name}')
self.actions.append('delete')
def run(self):


@@ -155,7 +155,7 @@ def query_port(module, port_path, name, state="present"):
rc, out, err = module.run_command([port_path, "-q", "installed", name])
if rc == 0 and out.strip().startswith(name + " "):
if rc == 0 and out.strip().startswith(f"{name} "):
return True
return False
@@ -184,13 +184,13 @@ def remove_ports(module, port_path, ports, stdout, stderr):
stdout += out
stderr += err
if query_port(module, port_path, port):
module.fail_json(msg="Failed to remove %s: %s" % (port, err), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to remove {port}: {err}", stdout=stdout, stderr=stderr)
remove_c += 1
if remove_c > 0:
module.exit_json(changed=True, msg="Removed %s port(s)" % remove_c, stdout=stdout, stderr=stderr)
module.exit_json(changed=True, msg=f"Removed {remove_c} port(s)", stdout=stdout, stderr=stderr)
module.exit_json(changed=False, msg="Port(s) already absent", stdout=stdout, stderr=stderr)
@@ -208,12 +208,12 @@ def install_ports(module, port_path, ports, variant, stdout, stderr):
stdout += out
stderr += err
if not query_port(module, port_path, port):
module.fail_json(msg="Failed to install %s: %s" % (port, err), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to install {port}: {err}", stdout=stdout, stderr=stderr)
install_c += 1
if install_c > 0:
module.exit_json(changed=True, msg="Installed %s port(s)" % (install_c), stdout=stdout, stderr=stderr)
module.exit_json(changed=True, msg=f"Installed {install_c} port(s)", stdout=stdout, stderr=stderr)
module.exit_json(changed=False, msg="Port(s) already present", stdout=stdout, stderr=stderr)
@@ -225,7 +225,7 @@ def activate_ports(module, port_path, ports, stdout, stderr):
for port in ports:
if not query_port(module, port_path, port):
module.fail_json(msg="Failed to activate %s, port(s) not present" % (port), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to activate {port}, port(s) not present", stdout=stdout, stderr=stderr)
if query_port(module, port_path, port, state="active"):
continue
@@ -235,12 +235,12 @@ def activate_ports(module, port_path, ports, stdout, stderr):
stderr += err
if not query_port(module, port_path, port, state="active"):
module.fail_json(msg="Failed to activate %s: %s" % (port, err), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to activate {port}: {err}", stdout=stdout, stderr=stderr)
activate_c += 1
if activate_c > 0:
module.exit_json(changed=True, msg="Activated %s port(s)" % (activate_c), stdout=stdout, stderr=stderr)
module.exit_json(changed=True, msg=f"Activated {activate_c} port(s)", stdout=stdout, stderr=stderr)
module.exit_json(changed=False, msg="Port(s) already active", stdout=stdout, stderr=stderr)
@@ -252,7 +252,7 @@ def deactivate_ports(module, port_path, ports, stdout, stderr):
for port in ports:
if not query_port(module, port_path, port):
module.fail_json(msg="Failed to deactivate %s, port(s) not present" % (port), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to deactivate {port}, port(s) not present", stdout=stdout, stderr=stderr)
if not query_port(module, port_path, port, state="active"):
continue
@@ -261,12 +261,12 @@ def deactivate_ports(module, port_path, ports, stdout, stderr):
stdout += out
stderr += err
if query_port(module, port_path, port, state="active"):
module.fail_json(msg="Failed to deactivate %s: %s" % (port, err), stdout=stdout, stderr=stderr)
module.fail_json(msg=f"Failed to deactivate {port}: {err}", stdout=stdout, stderr=stderr)
deactivated_c += 1
if deactivated_c > 0:
module.exit_json(changed=True, msg="Deactivated %s port(s)" % (deactivated_c), stdout=stdout, stderr=stderr)
module.exit_json(changed=True, msg=f"Deactivated {deactivated_c} port(s)", stdout=stdout, stderr=stderr)
module.exit_json(changed=False, msg="Port(s) already inactive", stdout=stdout, stderr=stderr)

View file

@ -290,8 +290,7 @@ def main():
secure_state = True
except ssl.SSLError as e:
if secure == 'always':
module.fail_json(rc=1, msg='Unable to start an encrypted session to %s:%s: %s' %
(host, port, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f'Unable to start an encrypted session to {host}:{port}: {to_native(e)}', exception=traceback.format_exc())
except Exception:
pass
@ -300,12 +299,12 @@ def main():
code, smtpmessage = smtp.connect(host, port)
except smtplib.SMTPException as e:
module.fail_json(rc=1, msg='Unable to Connect %s:%s: %s' % (host, port, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f'Unable to Connect {host}:{port}: {to_native(e)}', exception=traceback.format_exc())
try:
smtp.ehlo()
except smtplib.SMTPException as e:
module.fail_json(rc=1, msg='Helo failed for host %s:%s: %s' % (host, port, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f'Helo failed for host {host}:{port}: {to_native(e)}', exception=traceback.format_exc())
if int(code) > 0:
if not secure_state and secure in ('starttls', 'try'):
@ -314,26 +313,25 @@ def main():
smtp.starttls()
secure_state = True
except smtplib.SMTPException as e:
module.fail_json(rc=1, msg='Unable to start an encrypted session to %s:%s: %s' %
(host, port, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f'Unable to start an encrypted session to {host}:{port}: {e}', exception=traceback.format_exc())
try:
smtp.ehlo()
except smtplib.SMTPException as e:
module.fail_json(rc=1, msg='Helo failed for host %s:%s: %s' % (host, port, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f'Helo failed for host {host}:{port}: {e}', exception=traceback.format_exc())
else:
if secure == 'starttls':
module.fail_json(rc=1, msg='StartTLS is not offered on server %s:%s' % (host, port))
module.fail_json(rc=1, msg=f'StartTLS is not offered on server {host}:{port}')
if username and password:
if smtp.has_extn('AUTH'):
try:
smtp.login(username, password)
except smtplib.SMTPAuthenticationError:
module.fail_json(rc=1, msg='Authentication to %s:%s failed, please check your username and/or password' % (host, port))
module.fail_json(rc=1, msg=f'Authentication to {host}:{port} failed, please check your username and/or password')
except smtplib.SMTPException:
module.fail_json(rc=1, msg='No Suitable authentication method was found on %s:%s' % (host, port))
module.fail_json(rc=1, msg=f'No Suitable authentication method was found on {host}:{port}')
else:
module.fail_json(rc=1, msg="No Authentication on the server at %s:%s" % (host, port))
module.fail_json(rc=1, msg=f"No Authentication on the server at {host}:{port}")
if not secure_state and (username and password):
module.warn('Username and Password was sent without encryption')
@ -353,7 +351,7 @@ def main():
h_val = to_native(Header(h_val, charset))
msg.add_header(h_key, h_val)
except Exception:
module.warn("Skipping header '%s', unable to parse" % hdr)
module.warn(f"Skipping header '{hdr}', unable to parse")
if 'X-Mailer' not in msg:
msg.add_header('X-Mailer', 'Ansible mail module')
@ -374,7 +372,7 @@ def main():
addr_list.append(parseaddr(addr)[1]) # address only, w/o phrase
msg['Cc'] = ", ".join(cc_list)
part = MIMEText(body + "\n\n", _subtype=subtype, _charset=charset)
part = MIMEText(f"{body}\n\n", _subtype=subtype, _charset=charset)
msg.attach(part)
# NOTE: Backward compatibility with old syntax using space as delimiter is not retained
@ -388,22 +386,20 @@ def main():
part.add_header('Content-disposition', 'attachment', filename=os.path.basename(filename))
msg.attach(part)
except Exception as e:
module.fail_json(rc=1, msg="Failed to send community.general.mail: can't attach file %s: %s" %
(filename, to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f"Failed to send community.general.mail: can't attach file {filename}: {e}", exception=traceback.format_exc())
composed = msg.as_string()
try:
result = smtp.sendmail(sender_addr, set(addr_list), composed)
except Exception as e:
module.fail_json(rc=1, msg="Failed to send mail to '%s': %s" %
(", ".join(set(addr_list)), to_native(e)), exception=traceback.format_exc())
module.fail_json(rc=1, msg=f"Failed to send mail to '{', '.join(set(addr_list))}': {e}", exception=traceback.format_exc())
smtp.quit()
if result:
for key in result:
module.warn("Failed to send mail to '%s': %s %s" % (key, result[key][0], result[key][1]))
module.warn(f"Failed to send mail to '{key}': {result[key][0]} {result[key][1]}")
module.exit_json(msg='Failed to send mail to at least one recipient', result=result)
module.exit_json(msg='Mail sent successfully', result=result)

View file

@ -199,7 +199,7 @@ def main():
# Fall back to system make
make_path = module.get_bin_path('make', required=True)
if module.params['params'] is not None:
make_parameters = [k + (('=' + str(v)) if v is not None else '') for k, v in module.params['params'].items()]
make_parameters = [k + (f"={v!s}" if v is not None else '') for k, v in module.params['params'].items()]
else:
make_parameters = []

View file

@ -96,15 +96,15 @@ class ManageIQAlertProfiles(object):
self.module = self.manageiq.module
self.api_url = self.manageiq.api_url
self.client = self.manageiq.client
self.url = '{api_url}/alert_definition_profiles'.format(api_url=self.api_url)
self.url = f'{self.api_url}/alert_definition_profiles'
def get_profiles(self):
""" Get all alert profiles from ManageIQ
"""
try:
response = self.client.get(self.url + '?expand=alert_definitions,resources')
response = self.client.get(f"{self.url}?expand=alert_definitions,resources")
except Exception as e:
self.module.fail_json(msg="Failed to query alert profiles: {error}".format(error=e))
self.module.fail_json(msg=f"Failed to query alert profiles: {e}")
return response.get('resources') or []
def get_alerts(self, alert_descriptions):
@ -136,7 +136,7 @@ class ManageIQAlertProfiles(object):
try:
result = self.client.post(self.url, resource=profile_dict, action="create")
except Exception as e:
self.module.fail_json(msg="Creating profile failed {error}".format(error=e))
self.module.fail_json(msg=f"Creating profile failed {e}")
# now that it has been created, we can assign the alerts
self.assign_or_unassign(result['results'][0], alerts, "assign")
@ -151,28 +151,26 @@ class ManageIQAlertProfiles(object):
try:
self.client.post(profile['href'], action="delete")
except Exception as e:
self.module.fail_json(msg="Deleting profile failed: {error}".format(error=e))
self.module.fail_json(msg=f"Deleting profile failed: {e}")
msg = "Successfully deleted profile {name}".format(name=profile['name'])
msg = f"Successfully deleted profile {profile['name']}"
return dict(changed=True, msg=msg)
def get_alert_href(self, alert):
""" Get an absolute href for an alert
"""
return "{url}/alert_definitions/{id}".format(url=self.api_url, id=alert['id'])
return f"{self.api_url}/alert_definitions/{alert['id']}"
def assign_or_unassign(self, profile, resources, action):
""" Assign or unassign alerts to profile, and validate the result.
"""
alerts = [dict(href=href) for href in resources]
subcollection_url = profile['href'] + '/alert_definitions'
subcollection_url = f"{profile['href']}/alert_definitions"
try:
result = self.client.post(subcollection_url, resources=alerts, action=action)
if len(result['results']) != len(alerts):
msg = "Failed to {action} alerts to profile '{name}'," +\
"expected {expected} alerts to be {action}ed," +\
"but only {changed} were {action}ed"
msg = "Failed to {action} alerts to profile '{name}', expected {expected} alerts to be {action}ed, but only {changed} were {action}ed"
msg = msg.format(action=action,
name=profile['name'],
expected=len(alerts),
@ -190,7 +188,7 @@ class ManageIQAlertProfiles(object):
"""
changed = False
# we need to use client.get to query the alert definitions
old_profile = self.client.get(old_profile['href'] + '?expand=alert_definitions')
old_profile = self.client.get(f"{old_profile['href']}?expand=alert_definitions")
# figure out which alerts we need to assign / unassign
# alerts listed by the user:
@ -242,9 +240,9 @@ class ManageIQAlertProfiles(object):
self.module.fail_json(msg=msg)
if changed:
msg = "Profile {name} updated successfully".format(name=desired_profile['name'])
msg = f"Profile {desired_profile['name']} updated successfully"
else:
msg = "No update needed for profile {name}".format(name=desired_profile['name'])
msg = f"No update needed for profile {desired_profile['name']}"
return dict(changed=changed, msg=msg)

View file

@ -169,15 +169,15 @@ class ManageIQAlerts(object):
self.module = self.manageiq.module
self.api_url = self.manageiq.api_url
self.client = self.manageiq.client
self.alerts_url = '{api_url}/alert_definitions'.format(api_url=self.api_url)
self.alerts_url = f'{self.api_url}/alert_definitions'
def get_alerts(self):
""" Get all alerts from ManageIQ
"""
try:
response = self.client.get(self.alerts_url + '?expand=resources')
response = self.client.get(f"{self.alerts_url}?expand=resources")
except Exception as e:
self.module.fail_json(msg="Failed to query alerts: {error}".format(error=e))
self.module.fail_json(msg=f"Failed to query alerts: {e}")
return response.get('resources', [])
def validate_hash_expression(self, expression):
@ -186,7 +186,7 @@ class ManageIQAlerts(object):
# hash expressions must have the following fields
for key in ['options', 'eval_method', 'mode']:
if key not in expression:
msg = "Hash expression is missing required field {key}".format(key=key)
msg = f"Hash expression is missing required field {key}"
self.module.fail_json(msg)
def create_alert_dict(self, params):
@ -234,8 +234,7 @@ class ManageIQAlerts(object):
""" Delete an alert
"""
try:
result = self.client.post('{url}/{id}'.format(url=self.alerts_url,
id=alert['id']),
result = self.client.post(f"{self.alerts_url}/{alert['id']}",
action="delete")
msg = "Alert {description} deleted: {details}"
msg = msg.format(description=alert['description'], details=result)
@ -254,7 +253,7 @@ class ManageIQAlerts(object):
return dict(changed=False, msg="No update needed")
else:
try:
url = '{url}/{id}'.format(url=self.alerts_url, id=existing_alert['id'])
url = f"{self.alerts_url}/{existing_alert['id']}"
result = self.client.post(url, action="edit", resource=new_alert)
# make sure that the update was indeed successful by comparing

View file

@ -241,15 +241,15 @@ class ManageIQgroup(object):
if tenant_id:
tenant = self.client.get_entity('tenants', tenant_id)
if not tenant:
self.module.fail_json(msg="Tenant with id '%s' not found in manageiq" % str(tenant_id))
self.module.fail_json(msg=f"Tenant with id '{tenant_id}' not found in manageiq")
return tenant
else:
if tenant_name:
tenant_res = self.client.collections.tenants.find_by(name=tenant_name)
if not tenant_res:
self.module.fail_json(msg="Tenant '%s' not found in manageiq" % tenant_name)
self.module.fail_json(msg=f"Tenant '{tenant_name}' not found in manageiq")
if len(tenant_res) > 1:
self.module.fail_json(msg="Multiple tenants found in manageiq with name '%s'" % tenant_name)
self.module.fail_json(msg=f"Multiple tenants found in manageiq with name '{tenant_name}'")
tenant = tenant_res[0]
return tenant
else:
@ -266,15 +266,15 @@ class ManageIQgroup(object):
if role_id:
role = self.client.get_entity('roles', role_id)
if not role:
self.module.fail_json(msg="Role with id '%s' not found in manageiq" % str(role_id))
self.module.fail_json(msg=f"Role with id '{role_id}' not found in manageiq")
return role
else:
if role_name:
role_res = self.client.collections.roles.find_by(name=role_name)
if not role_res:
self.module.fail_json(msg="Role '%s' not found in manageiq" % role_name)
self.module.fail_json(msg=f"Role '{role_name}' not found in manageiq")
if len(role_res) > 1:
self.module.fail_json(msg="Multiple roles found in manageiq with name '%s'" % role_name)
self.module.fail_json(msg=f"Multiple roles found in manageiq with name '{role_name}'")
return role_res[0]
else:
# No role name or role id supplied
@ -316,17 +316,17 @@ class ManageIQgroup(object):
msg: a short message describing the operation executed.
"""
try:
url = '%s/groups/%s' % (self.api_url, group['id'])
url = f"{self.api_url}/groups/{group['id']}"
result = self.client.post(url, action='delete')
except Exception as e:
self.module.fail_json(msg="failed to delete group %s: %s" % (group['description'], str(e)))
self.module.fail_json(msg=f"failed to delete group {group['description']}: {e}")
if result['success'] is False:
self.module.fail_json(msg=result['message'])
return dict(
changed=True,
msg="deleted group %s with id %s" % (group['description'], group['id']))
msg=f"deleted group {group['description']} with id {group['id']}")
def edit_group(self, group, description, role, tenant, norm_managed_filters, managed_filters_merge_mode,
belongsto_filters, belongsto_filters_merge_mode):
@ -383,18 +383,18 @@ class ManageIQgroup(object):
if not changed:
return dict(
changed=False,
msg="group %s is not changed." % group['description'])
msg=f"group {group['description']} is not changed.")
# try to update group
try:
self.client.post(group['href'], action='edit', resource=resource)
changed = True
except Exception as e:
self.module.fail_json(msg="failed to update group %s: %s" % (group['name'], str(e)))
self.module.fail_json(msg=f"failed to update group {group['name']}: {e!s}")
return dict(
changed=changed,
msg="successfully updated the group %s with id %s" % (group['description'], group['id']))
msg=f"successfully updated the group {group['description']} with id {group['id']}")
def edit_group_edit_filters(self, current_filters, norm_managed_filters, managed_filters_merge_mode,
belongsto_filters, belongsto_filters_merge_mode):
@ -457,9 +457,9 @@ class ManageIQgroup(object):
# check for required arguments
for key, value in dict(description=description).items():
if value in (None, ''):
self.module.fail_json(msg="missing required argument: %s" % key)
self.module.fail_json(msg=f"missing required argument: {key}")
url = '%s/groups' % self.api_url
url = f'{self.api_url}/groups'
resource = {'description': description}
@ -476,11 +476,11 @@ class ManageIQgroup(object):
try:
result = self.client.post(url, action='create', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to create group %s: %s" % (description, str(e)))
self.module.fail_json(msg=f"failed to create group {description}: {e}")
return dict(
changed=True,
msg="successfully created group %s" % description,
msg=f"successfully created group {description}",
group_id=result['results'][0]['id']
)
@ -514,9 +514,9 @@ class ManageIQgroup(object):
for cat_key in managed_filters:
cat_array = []
if not isinstance(managed_filters[cat_key], list):
module.fail_json(msg='Entry "{0}" of managed_filters must be a list!'.format(cat_key))
module.fail_json(msg=f'Entry "{cat_key}" of managed_filters must be a list!')
for tags in managed_filters[cat_key]:
miq_managed_tag = "/managed/" + cat_key + "/" + tags
miq_managed_tag = f"/managed/{cat_key}/{tags}"
cat_array.append(miq_managed_tag)
# Do not add empty categories. ManageIQ will remove all categories that are not supplied
if cat_array:
@ -609,7 +609,7 @@ def main():
else:
res_args = dict(
changed=False,
msg="group '%s' does not exist in manageiq" % description)
msg=f"group '{description}' does not exist in manageiq")
# group should exist
if state == "present":

View file

@ -629,7 +629,7 @@ class ManageIQProvider(object):
zone = self.manageiq.find_collection_resource_by('zones', name=name)
if not zone: # zone doesn't exist
self.module.fail_json(
msg="zone %s does not exist in manageiq" % (name))
msg=f"zone {name} does not exist in manageiq")
return zone['id']
@ -661,7 +661,7 @@ class ManageIQProvider(object):
endpoint = endpoints.get(endpoint_key)
if endpoint:
# get role and authtype
role = endpoint.get('role') or provider_defaults.get(endpoint_key + '_role', 'default')
role = endpoint.get('role') or provider_defaults.get(f"{endpoint_key}_role", 'default')
if role == 'default':
authtype = provider_defaults.get('authtype') or role
else:
@ -695,10 +695,10 @@ class ManageIQProvider(object):
a short message describing the operation executed.
"""
try:
url = '%s/providers/%s' % (self.api_url, provider['id'])
url = f"{self.api_url}/providers/{provider['id']}"
result = self.client.post(url, action='delete')
except Exception as e:
self.module.fail_json(msg="failed to delete provider %s: %s" % (provider['name'], str(e)))
self.module.fail_json(msg=f"failed to delete provider {provider['name']}: {e}")
return dict(changed=True, msg=result['message'])
@ -710,7 +710,7 @@ class ManageIQProvider(object):
Returns:
a short message describing the operation executed.
"""
url = '%s/providers/%s' % (self.api_url, provider['id'])
url = f"{self.api_url}/providers/{provider['id']}"
resource = dict(
name=name,
@ -739,11 +739,11 @@ class ManageIQProvider(object):
try:
result = self.client.post(url, action='edit', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to update provider %s: %s" % (provider['name'], str(e)))
self.module.fail_json(msg=f"failed to update provider {provider['name']}: {e}")
return dict(
changed=True,
msg="successfully updated the provider %s: %s" % (provider['name'], result))
msg=f"successfully updated the provider {provider['name']}: {result}")
def create_provider(self, name, provider_type, endpoints, zone_id, provider_region,
host_default_vnc_port_start, host_default_vnc_port_end,
@ -772,14 +772,14 @@ class ManageIQProvider(object):
# try to create a new provider
try:
url = '%s/providers' % (self.api_url)
url = f'{self.api_url}/providers'
result = self.client.post(url, type=supported_providers()[provider_type]['class_name'], **resource)
except Exception as e:
self.module.fail_json(msg="failed to create provider %s: %s" % (name, str(e)))
self.module.fail_json(msg=f"failed to create provider {name}: {e}")
return dict(
changed=True,
msg="successfully created the provider %s: %s" % (name, result['results']))
msg=f"successfully created the provider {name}: {result['results']}")
def refresh(self, provider, name):
""" Trigger provider refresh.
@ -788,14 +788,14 @@ class ManageIQProvider(object):
a short message describing the operation executed.
"""
try:
url = '%s/providers/%s' % (self.api_url, provider['id'])
url = f"{self.api_url}/providers/{provider['id']}"
result = self.client.post(url, action='refresh')
except Exception as e:
self.module.fail_json(msg="failed to refresh provider %s: %s" % (name, str(e)))
self.module.fail_json(msg=f"failed to refresh provider {name}: {e}")
return dict(
changed=True,
msg="refreshing provider %s" % name)
msg=f"refreshing provider {name}")
def main():
@ -858,7 +858,7 @@ def main():
else:
res_args = dict(
changed=False,
msg="provider %s: does not exist in manageiq" % (name))
msg=f"provider {name}: does not exist in manageiq")
# provider should exist
if state == "present":
@ -878,7 +878,7 @@ def main():
# check supported_providers types
if provider_type not in supported_providers().keys():
manageiq_provider.module.fail_json(
msg="provider_type %s is not supported" % (provider_type))
msg=f"provider_type {provider_type} is not supported")
# build "connection_configurations" objects from user requested endpoints
# "provider" is a required endpoint, if we have it, we have endpoints
@ -903,7 +903,7 @@ def main():
else:
res_args = dict(
changed=False,
msg="provider %s: does not exist in manageiq" % (name))
msg=f"provider {name}: does not exist in manageiq")
module.exit_json(**res_args)

View file

@ -189,7 +189,7 @@ class ManageIQTenant(object):
if parent_id:
parent_tenant_res = self.client.collections.tenants.find_by(id=parent_id)
if not parent_tenant_res:
self.module.fail_json(msg="Parent tenant with id '%s' not found in manageiq" % str(parent_id))
self.module.fail_json(msg=f"Parent tenant with id '{parent_id}' not found in manageiq")
parent_tenant = parent_tenant_res[0]
tenants = self.client.collections.tenants.find_by(name=name)
@ -209,10 +209,10 @@ class ManageIQTenant(object):
if parent:
parent_tenant_res = self.client.collections.tenants.find_by(name=parent)
if not parent_tenant_res:
self.module.fail_json(msg="Parent tenant '%s' not found in manageiq" % parent)
self.module.fail_json(msg=f"Parent tenant '{parent}' not found in manageiq")
if len(parent_tenant_res) > 1:
self.module.fail_json(msg="Multiple parent tenants not found in manageiq with name '%s'" % parent)
self.module.fail_json(msg=f"Multiple parent tenants not found in manageiq with name '{parent}'")
parent_tenant = parent_tenant_res[0]
parent_id = int(parent_tenant['id'])
@ -254,10 +254,10 @@ class ManageIQTenant(object):
dict with `msg` and `changed`
"""
try:
url = '%s/tenants/%s' % (self.api_url, tenant['id'])
url = f"{self.api_url}/tenants/{tenant['id']}"
result = self.client.post(url, action='delete')
except Exception as e:
self.module.fail_json(msg="failed to delete tenant %s: %s" % (tenant['name'], str(e)))
self.module.fail_json(msg=f"failed to delete tenant {tenant['name']}: {e}")
if result['success'] is False:
self.module.fail_json(msg=result['message'])
@ -276,18 +276,18 @@ class ManageIQTenant(object):
if self.compare_tenant(tenant, name, description):
return dict(
changed=False,
msg="tenant %s is not changed." % tenant['name'],
msg=f"tenant {tenant['name']} is not changed.",
tenant=tenant['_data'])
# try to update tenant
try:
result = self.client.post(tenant['href'], action='edit', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to update tenant %s: %s" % (tenant['name'], str(e)))
self.module.fail_json(msg=f"failed to update tenant {tenant['name']}: {e}")
return dict(
changed=True,
msg="successfully updated the tenant with id %s" % (tenant['id']))
msg=f"successfully updated the tenant with id {tenant['id']}")
def create_tenant(self, name, description, parent_tenant):
""" Creates the tenant in manageiq.
@ -299,9 +299,9 @@ class ManageIQTenant(object):
# check for required arguments
for key, value in dict(name=name, description=description, parent_id=parent_id).items():
if value in (None, ''):
self.module.fail_json(msg="missing required argument: %s" % key)
self.module.fail_json(msg=f"missing required argument: {key}")
url = '%s/tenants' % self.api_url
url = f'{self.api_url}/tenants'
resource = {'name': name, 'description': description, 'parent': {'id': parent_id}}
@ -309,11 +309,11 @@ class ManageIQTenant(object):
result = self.client.post(url, action='create', resource=resource)
tenant_id = result['results'][0]['id']
except Exception as e:
self.module.fail_json(msg="failed to create tenant %s: %s" % (name, str(e)))
self.module.fail_json(msg=f"failed to create tenant {name}: {e}")
return dict(
changed=True,
msg="successfully created tenant '%s' with id '%s'" % (name, tenant_id),
msg=f"successfully created tenant '{name}' with id '{tenant_id}'",
tenant_id=tenant_id)
def tenant_quota(self, tenant, quota_key):
@ -322,7 +322,7 @@ class ManageIQTenant(object):
the quota for the tenant, or None if the tenant quota was not found.
"""
tenant_quotas = self.client.get("%s/quotas?expand=resources&filter[]=name=%s" % (tenant['href'], quota_key))
tenant_quotas = self.client.get(f"{tenant['href']}/quotas?expand=resources&filter[]=name={quota_key}")
return tenant_quotas['resources']
@ -332,7 +332,7 @@ class ManageIQTenant(object):
the quotas for the tenant, or None if no tenant quotas were not found.
"""
tenant_quotas = self.client.get("%s/quotas?expand=resources" % (tenant['href']))
tenant_quotas = self.client.get(f"{tenant['href']}/quotas?expand=resources")
return tenant_quotas['resources']
@ -366,7 +366,7 @@ class ManageIQTenant(object):
if current_quota:
res = self.delete_tenant_quota(tenant, current_quota)
else:
res = dict(changed=False, msg="tenant quota '%s' does not exist" % quota_key)
res = dict(changed=False, msg=f"tenant quota '{quota_key}' does not exist")
if res['changed']:
changed = True
@ -387,19 +387,19 @@ class ManageIQTenant(object):
if current_quota['value'] == quota_value:
return dict(
changed=False,
msg="tenant quota %s already has value %s" % (quota_key, quota_value))
msg=f"tenant quota {quota_key} already has value {quota_value}")
else:
url = '%s/quotas/%s' % (tenant['href'], current_quota['id'])
url = f"{tenant['href']}/quotas/{current_quota['id']}"
resource = {'value': quota_value}
try:
self.client.post(url, action='edit', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to update tenant quota %s: %s" % (quota_key, str(e)))
self.module.fail_json(msg=f"failed to update tenant quota {quota_key}: {e}")
return dict(
changed=True,
msg="successfully updated tenant quota %s" % quota_key)
msg=f"successfully updated tenant quota {quota_key}")
def create_tenant_quota(self, tenant, quota_key, quota_value):
""" Creates the tenant quotas in manageiq.
@ -407,16 +407,16 @@ class ManageIQTenant(object):
Returns:
result
"""
url = '%s/quotas' % (tenant['href'])
url = f"{tenant['href']}/quotas"
resource = {'name': quota_key, 'value': quota_value}
try:
self.client.post(url, action='create', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to create tenant quota %s: %s" % (quota_key, str(e)))
self.module.fail_json(msg=f"failed to create tenant quota {quota_key}: {e}")
return dict(
changed=True,
msg="successfully created tenant quota %s" % quota_key)
msg=f"successfully created tenant quota {quota_key}")
def delete_tenant_quota(self, tenant, quota):
""" deletes the tenant quotas in manageiq.
@ -427,7 +427,7 @@ class ManageIQTenant(object):
try:
result = self.client.post(quota['href'], action='delete')
except Exception as e:
self.module.fail_json(msg="failed to delete tenant quota '%s': %s" % (quota['name'], str(e)))
self.module.fail_json(msg=f"failed to delete tenant quota '{quota['name']}': {e}")
return dict(changed=True, msg=result['message'])
@ -512,9 +512,9 @@ def main():
# if we do not have a tenant, nothing to do
else:
if parent_id:
msg = "tenant '%s' with parent_id %i does not exist in manageiq" % (name, parent_id)
msg = f"tenant '{name}' with parent_id {int(parent_id)} does not exist in manageiq"
else:
msg = "tenant '%s' with parent '%s' does not exist in manageiq" % (name, parent)
msg = f"tenant '{name}' with parent '{parent}' does not exist in manageiq"
res_args = dict(
changed=False,

View file

@ -154,7 +154,7 @@ class ManageIQUser(object):
group = self.manageiq.find_collection_resource_by('groups', description=description)
if not group: # group doesn't exist
self.module.fail_json(
msg="group %s does not exist in manageiq" % (description))
msg=f"group {description} does not exist in manageiq")
return group['id']
@ -188,10 +188,10 @@ class ManageIQUser(object):
a short message describing the operation executed.
"""
try:
url = '%s/users/%s' % (self.api_url, user['id'])
url = f"{self.api_url}/users/{user['id']}"
result = self.client.post(url, action='delete')
except Exception as e:
self.module.fail_json(msg="failed to delete user %s: %s" % (user['userid'], str(e)))
self.module.fail_json(msg=f"failed to delete user {user['userid']}: {e}")
return dict(changed=True, msg=result['message'])
@ -202,7 +202,7 @@ class ManageIQUser(object):
a short message describing the operation executed.
"""
group_id = None
url = '%s/users/%s' % (self.api_url, user['id'])
url = f"{self.api_url}/users/{user['id']}"
resource = dict(userid=user['userid'])
if group is not None:
@ -224,17 +224,17 @@ class ManageIQUser(object):
if self.compare_user(user, name, group_id, password, email):
return dict(
changed=False,
msg="user %s is not changed." % (user['userid']))
msg=f"user {user['userid']} is not changed.")
# try to update user
try:
result = self.client.post(url, action='edit', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to update user %s: %s" % (user['userid'], str(e)))
self.module.fail_json(msg=f"failed to update user {user['userid']}: {e}")
return dict(
changed=True,
msg="successfully updated the user %s: %s" % (user['userid'], result))
msg=f"successfully updated the user {user['userid']}: {result}")
def create_user(self, userid, name, group, password, email):
""" Creates the user in manageiq.
@ -246,10 +246,10 @@ class ManageIQUser(object):
# check for required arguments
for key, value in dict(name=name, group=group, password=password).items():
if value in (None, ''):
self.module.fail_json(msg="missing required argument: %s" % (key))
self.module.fail_json(msg=f"missing required argument: {key}")
group_id = self.group_id(group)
url = '%s/users' % (self.api_url)
url = f'{self.api_url}/users'
resource = {'userid': userid, 'name': name, 'password': password, 'group': {'id': group_id}}
if email is not None:
@ -259,11 +259,11 @@ class ManageIQUser(object):
try:
result = self.client.post(url, action='create', resource=resource)
except Exception as e:
self.module.fail_json(msg="failed to create user %s: %s" % (userid, str(e)))
self.module.fail_json(msg=f"failed to create user {userid}: {e}")
return dict(
changed=True,
msg="successfully created the user %s: %s" % (userid, result['results']))
msg=f"successfully created the user {userid}: {result['results']}")
def main():
@ -305,7 +305,7 @@ def main():
else:
res_args = dict(
changed=False,
msg="user %s: does not exist in manageiq" % (userid))
msg=f"user {userid}: does not exist in manageiq")
# user should exist
if state == "present":

View file

@ -140,11 +140,11 @@ class Mas(object):
rc, out, err = self.run([command, str(id)])
if rc != 0:
self.module.fail_json(
msg="Error running command '{0}' on app '{1}': {2}".format(command, str(id), out.rstrip())
msg=f"Error running command '{command}' on app '{id}': {out.rstrip()}"
)
# No error or dry run
self.__dict__['count_' + command] += 1
self.__dict__[f"count_{command}"] += 1
def check_mas_tool(self):
''' Verifies that the `mas` tool is available in a recent version '''
@ -156,7 +156,7 @@ class Mas(object):
# Is the version recent enough?
rc, out, err = self.run(['version'])
if rc != 0 or not out.strip() or LooseVersion(out.strip()) < LooseVersion('1.5.0'):
self.module.fail_json(msg='`mas` tool in version 1.5.0+ needed, got ' + out.strip())
self.module.fail_json(msg=f"`mas` tool in version 1.5.0+ needed, got {out.strip()}")
def check_signin(self):
''' Verifies that the user is signed in to the Mac App Store '''
@ -178,11 +178,11 @@ class Mas(object):
msgs = []
if self.count_install > 0:
msgs.append('Installed {0} app(s)'.format(self.count_install))
msgs.append(f'Installed {self.count_install} app(s)')
if self.count_upgrade > 0:
msgs.append('Upgraded {0} app(s)'.format(self.count_upgrade))
msgs.append(f'Upgraded {self.count_upgrade} app(s)')
if self.count_uninstall > 0:
msgs.append('Uninstalled {0} app(s)'.format(self.count_uninstall))
msgs.append(f'Uninstalled {self.count_uninstall} app(s)')
if msgs:
self.result['changed'] = True
@ -250,7 +250,7 @@ class Mas(object):
rc, out, err = self.run(['upgrade'])
if rc != 0:
self.module.fail_json(msg='Could not upgrade all apps: ' + out.rstrip())
self.module.fail_json(msg=f"Could not upgrade all apps: {out.rstrip()}")
self.count_upgrade += len(outdated)

View file

@ -150,7 +150,7 @@ def main():
result = dict(changed=False, msg="OK")
# define webhook
webhook_url = "{0}/hooks/{1}".format(module.params['url'], module.params['api_key'])
webhook_url = f"{module.params['url']}/hooks/{module.params['api_key']}"
result['webhook_url'] = webhook_url
# define payload
@ -182,7 +182,7 @@ def main():
# something's wrong
if info['status'] != 200:
# some problem
result['msg'] = "Failed to send mattermost message, the error was: {0}".format(info['msg'])
result['msg'] = f"Failed to send mattermost message, the error was: {info['msg']}"
module.fail_json(**result)
# Looks good
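The webhook URL change above interpolates dict subscripts directly inside the braces. Note the quote alternation: single quotes for the keys inside a double-quoted f-string, since reusing the same quote type inside an f-string expression requires Python 3.12+. A small sketch with hypothetical parameter values:

```python
# Hypothetical module params, mirroring the mattermost webhook_url above.
params = {"url": "https://chat.example.com", "api_key": "abc123"}
# Single-quoted keys inside a double-quoted f-string keep this valid
# on all Python 3 versions; same-quote nesting needs 3.12+.
webhook_url = f"{params['url']}/hooks/{params['api_key']}"
print(webhook_url)  # https://chat.example.com/hooks/abc123
```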


@ -335,15 +335,15 @@ class Artifact(object):
if with_version and self.version:
timestamp_version_match = re.match("^(.*-)?([0-9]{8}\\.[0-9]{6}-[0-9]+)$", self.version)
if timestamp_version_match:
base = posixpath.join(base, timestamp_version_match.group(1) + "SNAPSHOT")
base = posixpath.join(base, f"{timestamp_version_match.group(1)}SNAPSHOT")
else:
base = posixpath.join(base, self.version)
return base
def _generate_filename(self):
filename = self.artifact_id + "-" + self.classifier + "." + self.extension
filename = f"{self.artifact_id}-{self.classifier}.{self.extension}"
if not self.classifier:
filename = self.artifact_id + "." + self.extension
filename = f"{self.artifact_id}.{self.extension}"
return filename
def get_filename(self, filename=None):
@ -354,11 +354,11 @@ class Artifact(object):
return filename
def __str__(self):
result = "%s:%s:%s" % (self.group_id, self.artifact_id, self.version)
result = f"{self.group_id}:{self.artifact_id}:{self.version}"
if self.classifier:
result = "%s:%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.classifier, self.version)
result = f"{self.group_id}:{self.artifact_id}:{self.extension}:{self.classifier}:{self.version}"
elif self.extension != "jar":
result = "%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.version)
result = f"{self.group_id}:{self.artifact_id}:{self.extension}:{self.version}"
return result
@staticmethod
@ -388,13 +388,13 @@ class MavenDownloader:
self.base = base
self.local = local
self.headers = headers
self.user_agent = "Ansible {0} maven_artifact".format(ansible_version)
self.user_agent = f"Ansible {ansible_version} maven_artifact"
self.latest_version_found = None
self.metadata_file_name = "maven-metadata-local.xml" if local else "maven-metadata.xml"
def find_version_by_spec(self, artifact):
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
path = f"/{artifact.path(False)}/{self.metadata_file_name}"
content = self._getContent(self.base + path, f"Failed to retrieve the maven metadata file: {path}")
xml = etree.fromstring(content)
original_versions = xml.xpath("/metadata/versioning/versions/version/text()")
versions = []
@ -427,7 +427,7 @@ class MavenDownloader:
selected_version = spec.select(versions)
if not selected_version:
raise ValueError("No version found with this spec version: {0}".format(artifact.version_by_spec))
raise ValueError(f"No version found with this spec version: {artifact.version_by_spec}")
# To deal when repos on maven don't have patch number on first build (e.g. 3.8 instead of 3.8.0)
if str(selected_version) not in original_versions:
@ -435,13 +435,13 @@ class MavenDownloader:
return str(selected_version)
raise ValueError("The spec version {0} is not supported! ".format(artifact.version_by_spec))
raise ValueError(f"The spec version {artifact.version_by_spec} is not supported! ")
def find_latest_version_available(self, artifact):
if self.latest_version_found:
return self.latest_version_found
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
path = f"/{artifact.path(False)}/{self.metadata_file_name}"
content = self._getContent(self.base + path, f"Failed to retrieve the maven metadata file: {path}")
xml = etree.fromstring(content)
v = xml.xpath("/metadata/versioning/versions/version[last()]/text()")
if v:
@ -458,8 +458,8 @@ class MavenDownloader:
if artifact.is_snapshot():
if self.local:
return self._uri_for_artifact(artifact, artifact.version)
path = "/%s/%s" % (artifact.path(), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
path = f"/{artifact.path()}/{self.metadata_file_name}"
content = self._getContent(self.base + path, f"Failed to retrieve the maven metadata file: {path}")
xml = etree.fromstring(content)
for snapshotArtifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
@ -473,19 +473,19 @@ class MavenDownloader:
if timestamp_xmlpath:
timestamp = timestamp_xmlpath[0]
build_number = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")[0]
return self._uri_for_artifact(artifact, artifact.version.replace("SNAPSHOT", timestamp + "-" + build_number))
return self._uri_for_artifact(artifact, artifact.version.replace("SNAPSHOT", f"{timestamp}-{build_number}"))
return self._uri_for_artifact(artifact, artifact.version)
def _uri_for_artifact(self, artifact, version=None):
if artifact.is_snapshot() and not version:
raise ValueError("Expected uniqueversion for snapshot artifact " + str(artifact))
raise ValueError(f"Expected uniqueversion for snapshot artifact {artifact}")
elif not artifact.is_snapshot():
version = artifact.version
if artifact.classifier:
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "-" + artifact.classifier + "." + artifact.extension)
return posixpath.join(self.base, artifact.path(), f"{artifact.artifact_id}-{version}-{artifact.classifier}.{artifact.extension}")
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "." + artifact.extension)
return posixpath.join(self.base, artifact.path(), f"{artifact.artifact_id}-{version}.{artifact.extension}")
# for small files, directly get the full content
def _getContent(self, url, failmsg, force=True):
@ -495,7 +495,7 @@ class MavenDownloader:
with io.open(parsed_url.path, 'rb') as f:
return f.read()
if force:
raise ValueError(failmsg + " because can not find file: " + url)
raise ValueError(f"{failmsg} because can not find file: {url}")
return None
response = self._request(url, failmsg, force)
if response:
@ -536,7 +536,7 @@ class MavenDownloader:
if info['status'] == 200:
return response
if force:
raise ValueError(failmsg + " because of " + info['msg'] + "for URL " + url_to_use)
raise ValueError(f"{failmsg} because of {info['msg']}for URL {url_to_use}")
return None
def download(self, tmpdir, artifact, verify_download, filename=None, checksum_alg='md5'):
@ -553,9 +553,9 @@ class MavenDownloader:
if os.path.isfile(parsed_url.path):
shutil.copy2(parsed_url.path, tempname)
else:
return "Can not find local file: " + parsed_url.path
return f"Can not find local file: {parsed_url.path}"
else:
response = self._request(url, "Failed to download artifact " + str(artifact))
response = self._request(url, f"Failed to download artifact {artifact}")
with os.fdopen(tempfd, 'wb') as f:
shutil.copyfileobj(response, f)
@ -581,11 +581,11 @@ class MavenDownloader:
remote_checksum = self._local_checksum(checksum_alg, parsed_url.path)
else:
try:
remote_checksum = to_text(self._getContent(remote_url + '.' + checksum_alg, "Failed to retrieve checksum", False), errors='strict')
remote_checksum = to_text(self._getContent(f"{remote_url}.{checksum_alg}", "Failed to retrieve checksum", False), errors='strict')
except UnicodeError as e:
return "Cannot retrieve a valid %s checksum from %s: %s" % (checksum_alg, remote_url, to_native(e))
return f"Cannot retrieve a valid {checksum_alg} checksum from {remote_url}: {to_native(e)}"
if not remote_checksum:
return "Cannot find %s checksum from %s" % (checksum_alg, remote_url)
return f"Cannot find {checksum_alg} checksum from {remote_url}"
try:
# Check if remote checksum only contains md5/sha1 or md5/sha1 + filename
_remote_checksum = remote_checksum.split(None, 1)[0]
@ -597,9 +597,9 @@ class MavenDownloader:
if local_checksum.lower() == remote_checksum.lower():
return None
else:
return "Checksum does not match: we computed " + local_checksum + " but the repository states " + remote_checksum
return f"Checksum does not match: we computed {local_checksum} but the repository states {remote_checksum}"
return "Path does not exist: " + file
return f"Path does not exist: {file}"
def _local_checksum(self, checksum_alg, file):
if checksum_alg.lower() == 'md5':
@ -607,7 +607,7 @@ class MavenDownloader:
elif checksum_alg.lower() == 'sha1':
hash = hashlib.sha1()
else:
raise ValueError("Unknown checksum_alg %s" % checksum_alg)
raise ValueError(f"Unknown checksum_alg {checksum_alg}")
with io.open(file, 'rb') as f:
for chunk in iter(lambda: f.read(8192), b''):
hash.update(chunk)
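The `_local_checksum` hunk above hashes the file in 8 KiB chunks so large artifacts never sit in memory whole. A self-contained paraphrase of that method (function name kept, driver code is illustrative only):

```python
import hashlib
import io
import os
import tempfile

def local_checksum(checksum_alg, path):
    # Same structure as MavenDownloader._local_checksum above:
    # pick the digest, then feed the file through in 8 KiB chunks.
    if checksum_alg.lower() == "md5":
        h = hashlib.md5()
    elif checksum_alg.lower() == "sha1":
        h = hashlib.sha1()
    else:
        raise ValueError(f"Unknown checksum_alg {checksum_alg}")
    with io.open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

fd, name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello")
print(local_checksum("md5", name))  # md5 hexdigest of b"hello"
os.unlink(name)
```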
@ -660,7 +660,7 @@ def main():
try:
parsed_url = urlparse(repository_url)
except AttributeError as e:
module.fail_json(msg='url parsing went wrong %s' % e)
module.fail_json(msg=f'url parsing went wrong {e}')
local = parsed_url.scheme == "file"
@ -717,12 +717,7 @@ def main():
elif version_by_spec:
version_part = downloader.find_version_by_spec(artifact)
filename = "{artifact_id}{version_part}{classifier}.{extension}".format(
artifact_id=artifact_id,
version_part="-{0}".format(version_part) if keep_name else "",
classifier="-{0}".format(classifier) if classifier else "",
extension=extension
)
filename = f"{artifact_id}{(f'-{version_part}' if keep_name else '')}{(f'-{classifier}' if classifier else '')}.{extension}"
dest = posixpath.join(dest, filename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
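The collapsed filename expression above nests conditional f-strings inside the outer braces, replacing the multi-line `.format()` call. A sketch with hypothetical values showing how the optional `-version` and `-classifier` pieces compose:

```python
# Hypothetical artifact coordinates, not from the module.
artifact_id, extension = "mylib", "jar"
version_part, classifier, keep_name = "1.2.3", "sources", True
# Inner f-strings use single quotes so they can nest inside the
# double-quoted outer f-string on any Python 3 version.
filename = f"{artifact_id}{(f'-{version_part}' if keep_name else '')}{(f'-{classifier}' if classifier else '')}.{extension}"
print(filename)  # mylib-1.2.3-sources.jar
```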
@ -736,7 +731,7 @@ def main():
if download_error is None:
changed = True
else:
module.fail_json(msg="Cannot retrieve the artifact to destination: " + download_error)
module.fail_json(msg=f"Cannot retrieve the artifact to destination: {download_error}")
except ValueError as e:
module.fail_json(msg=e.args[0])


@ -129,9 +129,9 @@ def get_facts(args=None):
retvals['failed'] = has_failed
retvals['msg'] = msg
if response.status_code is not None:
retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
retvals['stderr'] = f"API returned an error: {response.status_code}"
else:
retvals['stderr'] = "{0}" . format(response.stderr)
retvals['stderr'] = f"{response.stderr}"
return retvals
# we don't want to return the same thing twice


@ -260,9 +260,9 @@ def get_facts(args=None):
retvals['failed'] = has_failed
retvals['msg'] = msg
if response.status_code is not None:
retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
retvals['stderr'] = f"API returned an error: {response.status_code}"
else:
retvals['stderr'] = "{0}" . format(response.stderr)
retvals['stderr'] = f"{response.stderr}"
return retvals
# we don't want to return the same thing twice


@ -191,7 +191,7 @@ def create_or_delete_domain(args=None):
retvals['failed'] = has_failed
retvals['msg'] = msg
if response.status_code is not None:
retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
retvals['stderr'] = f"API returned an error: {response.status_code}"
else:
retvals['stderr'] = response.stderr
return retvals
@ -203,9 +203,9 @@ def create_or_delete_domain(args=None):
# makes sense in the context of this module.
has_failed = True
if counter == 0:
stderr = "DNS zone '{0}' does not exist, cannot create domain." . format(args['zone'])
stderr = f"DNS zone '{args['zone']}' does not exist, cannot create domain."
elif counter > 1:
stderr = "{0} matches multiple zones, cannot create domain." . format(args['zone'])
stderr = f"{args['zone']} matches multiple zones, cannot create domain."
retvals['failed'] = has_failed
retvals['msg'] = stderr


@ -309,7 +309,7 @@ def create_or_delete(args=None):
retvals['failed'] = _has_failed
retvals['msg'] = msg
if response.status_code is not None:
retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
retvals['stderr'] = f"API returned an error: {response.status_code}"
else:
retvals['stderr'] = response.stderr
return retvals
@ -319,9 +319,9 @@ def create_or_delete(args=None):
if not zone_exists:
has_failed = True
if counter == 0:
stderr = "DNS zone {0} does not exist." . format(args['zone'])
stderr = f"DNS zone {args['zone']} does not exist."
elif counter > 1:
stderr = "{0} matches multiple zones." . format(args['zone'])
stderr = f"{args['zone']} matches multiple zones."
retvals['failed'] = has_failed
retvals['msg'] = stderr
retvals['stderr'] = stderr


@ -133,17 +133,17 @@ class MkSysB(ModuleHelper):
extended_attrs=cmd_runner_fmt.as_bool("-a"),
backup_crypt_files=cmd_runner_fmt.as_bool_not("-Z"),
backup_dmapi_fs=cmd_runner_fmt.as_bool("-A"),
combined_path=cmd_runner_fmt.as_func(cmd_runner_fmt.unpack_args(lambda p, n: ["%s/%s" % (p, n)])),
combined_path=cmd_runner_fmt.as_func(cmd_runner_fmt.unpack_args(lambda p, n: [f"{p}/{n}"])),
)
def __init_module__(self):
if not os.path.isdir(self.vars.storage_path):
self.do_raise("Storage path %s is not valid." % self.vars.storage_path)
self.do_raise(f"Storage path {self.vars.storage_path} is not valid.")
def __run__(self):
def process(rc, out, err):
if rc != 0:
self.do_raise("mksysb failed: {0}".format(out))
self.do_raise(f"mksysb failed: {out}")
runner = CmdRunner(
self.module,


@ -107,9 +107,9 @@ class Modprobe(object):
self.changed = False
self.re_find_module = re.compile(r'^ *{0} *(?:[#;].*)?\n?\Z'.format(self.name))
self.re_find_params = re.compile(r'^options {0} \w+=\S+ *(?:[#;].*)?\n?\Z'.format(self.name))
self.re_get_params_and_values = re.compile(r'^options {0} (\w+=\S+) *(?:[#;].*)?\n?\Z'.format(self.name))
self.re_find_module = re.compile(rf'^ *{self.name} *(?:[#;].*)?\n?\Z')
self.re_find_params = re.compile(rf'^options {self.name} \w+=\S+ *(?:[#;].*)?\n?\Z')
self.re_get_params_and_values = re.compile(rf'^options {self.name} (\w+=\S+) *(?:[#;].*)?\n?\Z')
def load_module(self):
command = [self.modprobe_bin]
@ -162,20 +162,19 @@ class Modprobe(object):
def create_module_file(self):
file_path = os.path.join(MODULES_LOAD_LOCATION,
self.name + '.conf')
f"{self.name}.conf")
if not self.check_mode:
with open(file_path, 'w') as file:
file.write(self.name + '\n')
file.write(f"{self.name}\n")
@property
def module_options_file_content(self):
file_content = ['options {0} {1}'.format(self.name, param)
for param in self.params.split()]
return '\n'.join(file_content) + '\n'
file_content = '\n'.join([f'options {self.name} {param}' for param in self.params.split()])
return f"{file_content}\n"
def create_module_options_file(self):
new_file_path = os.path.join(PARAMETERS_FILES_LOCATION,
self.name + '.conf')
f"{self.name}.conf")
if not self.check_mode:
with open(new_file_path, 'w') as file:
file.write(self.module_options_file_content)
@ -189,7 +188,7 @@ class Modprobe(object):
content_changed = False
for index, line in enumerate(file_content):
if self.re_find_params.match(line):
file_content[index] = '#' + line
file_content[index] = f"#{line}"
content_changed = True
if not self.check_mode and content_changed:
@ -205,7 +204,7 @@ class Modprobe(object):
content_changed = False
for index, line in enumerate(file_content):
if self.re_find_module.match(line):
file_content[index] = '#' + line
file_content[index] = f"#{line}"
content_changed = True
if not self.check_mode and content_changed:
@ -252,14 +251,14 @@ class Modprobe(object):
is_loaded = False
try:
with open('/proc/modules') as modules:
module_name = self.name.replace('-', '_') + ' '
module_name = f"{self.name.replace('-', '_')} "
for line in modules:
if line.startswith(module_name):
is_loaded = True
break
if not is_loaded:
module_file = '/' + self.name + '.ko'
module_file = f"/{self.name}.ko"
builtin_path = os.path.join('/lib/modules/', RELEASE_VER, 'modules.builtin')
with open(builtin_path) as builtins:
for line in builtins:
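The modprobe hunks above use `rf''` literals: the raw prefix keeps regex escapes like `\w` and `\S` intact while the f-string part substitutes the module name. A minimal sketch (module name hypothetical; a name containing regex metacharacters would additionally need `re.escape`):

```python
import re

name = "dummy_module"  # hypothetical; assumed free of regex metacharacters
# rf'' = raw string (regex escapes survive) + f-string ({name} substituted),
# matching the re_find_params pattern built in the Modprobe class above.
re_find_params = re.compile(rf'^options {name} \w+=\S+ *(?:[#;].*)?\n?\Z')
print(bool(re_find_params.match("options dummy_module debug=1\n")))   # True
print(bool(re_find_params.match("options other_module debug=1\n")))   # False
```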


@ -87,12 +87,12 @@ class StatusValue(namedtuple("Status", "value, is_pending")):
return StatusValue(self.value, True)
def __getattr__(self, item):
if item in ('is_%s' % status for status in self.ALL_STATUS):
if item in (f'is_{status}' for status in self.ALL_STATUS):
return self.value == getattr(self, item[3:].upper())
raise AttributeError(item)
def __str__(self):
return "%s%s" % (self.value, " (pending)" if self.is_pending else "")
return f"{self.value}{' (pending)' if self.is_pending else ''}"
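The `__getattr__` above generates candidate names like `is_ok` with an f-string and maps them back to class-level status constants via `item[3:].upper()`. A trimmed-down sketch (the two statuses and their values are hypothetical stand-ins for the monit set):

```python
from collections import namedtuple

class StatusValue(namedtuple("Status", "value, is_pending")):
    # Hypothetical subset of the monit statuses in the module above.
    ALL_STATUS = ["ok", "missing"]
    OK, MISSING = 1, 2

    def __getattr__(self, item):
        # f'is_{status}' builds the accepted attribute names on the fly;
        # item[3:].upper() turns 'is_ok' into the OK constant lookup.
        if item in (f'is_{status}' for status in self.ALL_STATUS):
            return self.value == getattr(self, item[3:].upper())
        raise AttributeError(item)

s = StatusValue(1, False)
print(s.is_ok, s.is_missing)  # True False
```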
class Status(object):
@ -157,7 +157,7 @@ class Monit(object):
def _parse_status(self, output, err):
escaped_monit_services = '|'.join([re.escape(x) for x in MONIT_SERVICES])
pattern = "(%s) '%s'" % (escaped_monit_services, re.escape(self.process_name))
pattern = f"({escaped_monit_services}) '{re.escape(self.process_name)}'"
if not re.search(pattern, output, re.IGNORECASE):
return Status.MISSING
@ -173,7 +173,7 @@ class Monit(object):
try:
return getattr(Status, status_val)
except AttributeError:
self.module.warn("Unknown monit status '%s', treating as execution failed" % status_val)
self.module.warn(f"Unknown monit status '{status_val}', treating as execution failed")
return Status.EXECUTION_FAILED
else:
status_val, substatus = status_val.split(' - ')
@ -190,7 +190,7 @@ class Monit(object):
def is_process_present(self):
command = [self.monit_bin_path, 'summary'] + self.command_args
rc, out, err = self.module.run_command(command, check_rc=True)
return bool(re.findall(r'\b%s\b' % self.process_name, out))
return bool(re.findall(rf'\b{self.process_name}\b', out))
def is_process_running(self):
return self.get_status().is_ok
@ -262,7 +262,7 @@ class Monit(object):
status_match = not status_match
if status_match:
self.exit_success(state=state)
self.exit_fail('%s process not %s' % (self.process_name, state), status)
self.exit_fail(f'{self.process_name} process not {state}', status)
def stop(self):
self.change_state('stopped', Status.NOT_MONITORED)
@ -306,7 +306,7 @@ def main():
present = monit.is_process_present()
if not present and not state == 'present':
module.fail_json(msg='%s process not presently configured with monit' % name, name=name)
module.fail_json(msg=f'{name} process not presently configured with monit', name=name)
if state == 'present':
if present:


@ -134,7 +134,6 @@ except ImportError:
HAS_PAHOMQTT = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
# ===========================================
@ -191,7 +190,7 @@ def main():
tls_version = module.params.get("tls_version", None)
if client_id is None:
client_id = "%s_%s" % (socket.getfqdn(), os.getpid())
client_id = f"{socket.getfqdn()}_{os.getpid()}"
if payload and payload == 'None':
payload = None
@ -235,7 +234,7 @@ def main():
)
except Exception as e:
module.fail_json(
msg="unable to publish to MQTT broker %s" % to_native(e),
msg=f"unable to publish to MQTT broker {e}",
exception=traceback.format_exc()
)


@ -116,29 +116,29 @@ def db_exists(conn, cursor, db):
def db_create(conn, cursor, db):
cursor.execute("CREATE DATABASE [%s]" % db)
cursor.execute(f"CREATE DATABASE [{db}]")
return db_exists(conn, cursor, db)
def db_delete(conn, cursor, db):
try:
cursor.execute("ALTER DATABASE [%s] SET single_user WITH ROLLBACK IMMEDIATE" % db)
cursor.execute(f"ALTER DATABASE [{db}] SET single_user WITH ROLLBACK IMMEDIATE")
except Exception:
pass
cursor.execute("DROP DATABASE [%s]" % db)
cursor.execute(f"DROP DATABASE [{db}]")
return not db_exists(conn, cursor, db)
def db_import(conn, cursor, module, db, target):
if os.path.isfile(target):
with open(target, 'r') as backup:
sqlQuery = "USE [%s]\n" % db
sqlQuery = f"USE [{db}]\n"
for line in backup:
if line is None:
break
elif line.startswith('GO'):
cursor.execute(sqlQuery)
sqlQuery = "USE [%s]\n" % db
sqlQuery = f"USE [{db}]\n"
else:
sqlQuery += line
cursor.execute(sqlQuery)
@ -178,7 +178,7 @@ def main():
login_querystring = login_host
if login_port != "1433":
login_querystring = "%s:%s" % (login_host, login_port)
login_querystring = f"{login_host}:{login_port}"
if login_user != "" and login_password == "":
module.fail_json(msg="when supplying login_user arguments login_password must be provided")
@ -189,7 +189,7 @@ def main():
except Exception as e:
if "Unknown database" in str(e):
errno, errstr = e.args
module.fail_json(msg="ERROR: %s %s" % (errno, errstr))
module.fail_json(msg=f"ERROR: {errno} {errstr}")
else:
module.fail_json(msg="unable to connect, check login_user and login_password are correct, or alternatively check your "
"@sysconfdir@/freetds.conf / ${HOME}/.freetds.conf")
@ -202,7 +202,7 @@ def main():
try:
changed = db_delete(conn, cursor, db)
except Exception as e:
module.fail_json(msg="error deleting database: " + str(e))
module.fail_json(msg=f"error deleting database: {e}")
elif state == "import":
conn.autocommit(autocommit)
rc, stdout, stderr = db_import(conn, cursor, module, db, target)
@ -216,12 +216,12 @@ def main():
try:
changed = db_create(conn, cursor, db)
except Exception as e:
module.fail_json(msg="error creating database: " + str(e))
module.fail_json(msg=f"error creating database: {e}")
elif state == "import":
try:
changed = db_create(conn, cursor, db)
except Exception as e:
module.fail_json(msg="error creating database: " + str(e))
module.fail_json(msg=f"error creating database: {e}")
conn.autocommit(autocommit)
rc, stdout, stderr = db_import(conn, cursor, module, db, target)


@ -317,7 +317,7 @@ def run_module():
login_querystring = login_host
if login_port != 1433:
login_querystring = "%s:%s" % (login_host, login_port)
login_querystring = f"{login_host}:{login_port}"
if login_user is not None and login_password is None:
module.fail_json(
@ -330,7 +330,7 @@ def run_module():
except Exception as e:
if "Unknown database" in str(e):
errno, errstr = e.args
module.fail_json(msg="ERROR: %s %s" % (errno, errstr))
module.fail_json(msg=f"ERROR: {errno} {errstr}")
else:
module.fail_json(msg="unable to connect, check login_user and login_password are correct, or alternatively check your "
"@sysconfdir@/freetds.conf / ${HOME}/.freetds.conf")
@ -388,7 +388,7 @@ def run_module():
# Rollback transaction before failing the module in case of error
if transaction:
conn.rollback()
error_msg = '%s: %s' % (type(e).__name__, str(e))
error_msg = f'{type(e).__name__}: {e}'
module.fail_json(msg="query failed", query=query, error=error_msg, **result)
# Commit transaction before exiting the module in case of no error
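The `error_msg` change above renders an exception as `TypeName: message` in one f-string. A tiny standalone sketch of that idiom (helper name hypothetical):

```python
def describe(exc):
    # f'{type(e).__name__}: {e}' is how the mssql_script module above
    # formats a caught exception for fail_json.
    return f"{type(exc).__name__}: {exc}"

print(describe(ValueError("bad query")))  # ValueError: bad query
```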