# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/
# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

import logging
from io import IOBase

from urllib3.exceptions import ProtocolError as URLLib3ProtocolError
from urllib3.exceptions import ReadTimeoutError as URLLib3ReadTimeoutError

from botocore import (
    ScalarTypes,  # noqa: F401
    parsers,
)
from botocore.compat import (
    XMLParseError,  # noqa: F401
    set_socket_timeout,
)
from botocore.exceptions import (
    IncompleteReadError,
    ReadTimeoutError,
    ResponseStreamingError,
)
from botocore.hooks import first_non_none_response  # noqa

logger = logging.getLogger(__name__)


class StreamingBody(IOBase):
    """Wrapper class for an http response body.

    This provides a few additional conveniences that do not exist
    in the urllib3 model:

    * Set the timeout on the socket (i.e. read() timeouts)
    * Auto validation of content length: if the number of bytes
      we read does not match the content length, an exception
      is raised.
    """

    _DEFAULT_CHUNK_SIZE = 1024

    def __init__(self, raw_stream, content_length):
        self._raw_stream = raw_stream
        self._content_length = content_length
        self._amount_read = 0

    def __del__(self):
        # Extending the destructor in order to preserve the underlying
        # raw_stream. The ability to add custom cleanup logic was
        # introduced in Python 3.4+.
        # https://www.python.org/dev/peps/pep-0442/
        pass

    def set_socket_timeout(self, timeout):
        """Set the timeout seconds on the socket."""
        # The problem we're trying to solve is to prevent .read() calls from
        # hanging. This can happen in rare cases. What we'd ideally like to
        # do is set a timeout on the .read() call so that callers can retry
        # the request.
        # Unfortunately, this isn't currently possible in requests.
        # See: https://github.com/kennethreitz/requests/issues/1803
        # So what we're going to do is reach into the guts of the stream and
        # grab the socket object, which we can set the timeout on. We're
        # putting in a check here so in case this interface goes away, we'll
        # know.
        try:
            set_socket_timeout(self._raw_stream, timeout)
        except AttributeError:
            logger.exception(
                "Cannot access the socket object of a streaming response. "
                "It's possible the interface has changed."
            )
            raise

    def readable(self):
        try:
            return self._raw_stream.readable()
        except AttributeError:
            return False

    def read(self, amt=None):
        """Read at most amt bytes from the stream.

        If the amt argument is omitted, read all data.
        """
        try:
            chunk = self._raw_stream.read(amt)
        except URLLib3ReadTimeoutError as e:
            # TODO: the url will be None as urllib3 isn't setting it yet
            raise ReadTimeoutError(endpoint_url=e.url, error=e)
        except URLLib3ProtocolError as e:
            raise ResponseStreamingError(error=e)
        self._amount_read += len(chunk)
        if amt is None or (not chunk and amt > 0):
            # If the server sends empty contents or
            # we ask to read all of the contents, then we know
            # we need to verify the content length.
            self._verify_content_length()
        return chunk

    def readinto(self, b):
        """Read bytes into a pre-allocated, writable bytes-like object b.

        Returns the number of bytes read.
        """
        try:
            amount_read = self._raw_stream.readinto(b)
        except URLLib3ReadTimeoutError as e:
            # TODO: the url will be None as urllib3 isn't setting it yet
            raise ReadTimeoutError(endpoint_url=e.url, error=e)
        except URLLib3ProtocolError as e:
            raise ResponseStreamingError(error=e)
        self._amount_read += amount_read
        if amount_read == 0 and len(b) > 0:
            # If the server sends empty contents then we know we need to
            # verify the content length.
            self._verify_content_length()
        return amount_read

    def readlines(self):
        return self._raw_stream.readlines()

    def __iter__(self):
        """Return an iterator to yield 1k chunks from the raw stream."""
        return self.iter_chunks(self._DEFAULT_CHUNK_SIZE)

    def __next__(self):
        """Return the next 1k chunk from the raw stream."""
        current_chunk = self.read(self._DEFAULT_CHUNK_SIZE)
        if current_chunk:
            return current_chunk
        raise StopIteration()

    def __enter__(self):
        return self._raw_stream

    def __exit__(self, type, value, traceback):
        self._raw_stream.close()

    next = __next__

    def iter_lines(self, chunk_size=_DEFAULT_CHUNK_SIZE, keepends=False):
        """Return an iterator to yield lines from the raw stream.

        This is achieved by reading chunks of bytes (of size chunk_size) at
        a time from the raw stream, and then yielding lines from there.
        """
        pending = b''
        for chunk in self.iter_chunks(chunk_size):
            lines = (pending + chunk).splitlines(True)
            for line in lines[:-1]:
                yield line.splitlines(keepends)[0]
            pending = lines[-1]
        if pending:
            yield pending.splitlines(keepends)[0]

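The pending-buffer strategy in `iter_lines` can be exercised in isolation. The sketch below re-implements that loop over an already-chunked byte sequence; the function name and harness are illustrative and not part of botocore:

```python
# Standalone sketch of the iter_lines buffering strategy: split lines out
# of fixed-size chunks, carrying any partial trailing line over in
# `pending` until the next chunk (or end of stream) completes it.
# Assumes every chunk is non-empty, as iter_chunks guarantees above.
def iter_lines_from_chunks(chunks, keepends=False):
    pending = b''
    for chunk in chunks:
        lines = (pending + chunk).splitlines(True)
        # Yield every complete line; the (possibly partial) last element
        # is held back as the new pending buffer.
        for line in lines[:-1]:
            yield line.splitlines(keepends)[0]
        pending = lines[-1]
    if pending:
        yield pending.splitlines(keepends)[0]
```

For example, `list(iter_lines_from_chunks([b"ab\ncd", b"e\nfg"]))` yields `[b"ab", b"cde", b"fg"]`: the line split across the chunk boundary is reassembled before being yielded.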
    def iter_chunks(self, chunk_size=_DEFAULT_CHUNK_SIZE):
        """Return an iterator to yield chunks of chunk_size bytes from the
        raw stream.
        """
        while True:
            current_chunk = self.read(chunk_size)
            if current_chunk == b"":
                break
            yield current_chunk

    def _verify_content_length(self):
        # See: https://github.com/kennethreitz/requests/issues/1855
        # Basically, our http library doesn't do this for us, so we have
        # to do this ourselves.
        if self._content_length is not None and self._amount_read != int(
            self._content_length
        ):
            raise IncompleteReadError(
                actual_bytes=self._amount_read,
                expected_bytes=int(self._content_length),
            )

    def tell(self):
        return self._raw_stream.tell()

    def close(self):
        """Close the underlying http response stream."""
        self._raw_stream.close()


def get_response(operation_model, http_response):
    protocol = operation_model.service_model.resolved_protocol
    response_dict = {
        'headers': http_response.headers,
        'status_code': http_response.status_code,
    }
    # TODO: Unfortunately, we have to have error logic here.
    # If it looks like an error, in the streaming response case we
    # need to actually grab the contents.
    if response_dict['status_code'] >= 300:
        response_dict['body'] = http_response.content
    elif operation_model.has_streaming_output:
        response_dict['body'] = StreamingBody(
            http_response.raw, response_dict['headers'].get('content-length')
        )
    else:
        response_dict['body'] = http_response.content

    parser = parsers.create_parser(protocol)
    return http_response, parser.parse(
        response_dict, operation_model.output_shape
    )
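The body-selection branch in `get_response` can be sketched with plain stand-ins: error responses (status >= 300) and non-streaming outputs are buffered in full, while streaming outputs hand back the raw stream for wrapping. The helper name and arguments below are illustrative only:

```python
import io

def choose_body(status_code, has_streaming_output, raw_stream, content):
    # Mirrors the branch order in get_response: errors are buffered first
    # so the parser can inspect them, even for streaming operations.
    if status_code >= 300:
        return content
    if has_streaming_output:
        return raw_stream  # the real code wraps this in StreamingBody
    return content
```

Checking errors before the streaming flag is deliberate: an error body is usually a small XML/JSON document that the parser must read in full, regardless of what the operation's modeled output shape says.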