Compare commits
18 commits: fa1a7a057e ... main

Commits (SHA1):
0db8381ba0
0c300321d7
869f912886
766a43f2e4
02a9734961
c1af8f1b61
452c49335e
76b490a61f
761f948a62
3038a2978b
c09970cc54
8e226ad4a3
199bc29115
b1a9204e22
8762db2c0e
c4bb3525d3
2d3345bb6d
ecaca02400
.gitignore (vendored): 3 changes
@@ -1,3 +1,6 @@
+update_data
+log
+
 # Go build artifacts
 bin/
 *.exe
@@ -3,11 +3,11 @@

 ## Project Structure & Module Organization
 - `cmd/server/main.go` is the Fiber entrypoint that wires config, routes, and startup logging.
 - `internal/geo` owns GeoLite2 lookups, IP validation, and response shaping.
-- `docker-compose.yml` defines the container entry; `Dockerfile` builds a static binary. `GeoLite2-City.mmdb` sits at the repo root and is mounted to `/data/GeoLite2-City.mmdb`.
+- `docker-compose.yml` defines the container entry; `Dockerfile` builds a static binary. `GeoLite2-City.mmdb` sits at the repo root and is mounted to `/initial_data/GeoLite2-City.mmdb`.
 - Keep `cmd/server` thin; place new logic in `internal/<domain>` with clear boundaries.

 ## Build, Test, and Development Commands
-- `PORT=8080 GEOIP_DB_PATH=./GeoLite2-City.mmdb go run ./cmd/server` runs the API locally without Docker.
+- `SERVICE_PORT=8080 GEOIP_DB_PATH=./GeoLite2-City.mmdb go run ./cmd/server` runs the API locally without Docker.
 - `docker compose up --build` builds and starts the containerized service (mounts the local database).
 - `curl "http://localhost:8080/lookup?ip=1.1.1.1"` exercises the lookup endpoint; omit `ip` to use the caller's address.
 - `go build ./...` validates compilation before pushing changes.
Dockerfile: 28 changes
@@ -8,18 +8,36 @@ COPY go.mod ./
 RUN go mod download

 COPY . .
-RUN CGO_ENABLED=0 go build -o /bin/geoip ./cmd/server
+RUN CGO_ENABLED=0 go build -o /bin/geoip ./cmd/server && \
+    CGO_ENABLED=0 go build -o /bin/geoip-loader ./cmd/loader && \
+    CGO_ENABLED=0 go build -o /bin/user-program-import ./cmd/user_program_import && \
+    CGO_ENABLED=0 go build -o /bin/user-program-dump ./cmd/user_program_dump && \
+    CGO_ENABLED=0 go build -o /bin/user-program-sync ./cmd/user_program_sync

 FROM debian:trixie-slim

-RUN useradd --create-home --shell /usr/sbin/nologin appuser
+ARG APP_UID=1000
+ARG APP_GID=1000

-WORKDIR /app
+ENV TZ=Asia/Seoul
+
+RUN groupadd -g ${APP_GID} appuser && \
+    useradd --create-home --shell /usr/sbin/nologin --uid ${APP_UID} --gid ${APP_GID} appuser
+
+RUN ln -snf /usr/share/zoneinfo/${TZ} /etc/localtime && echo ${TZ} > /etc/timezone
+
+WORKDIR /

 COPY --from=builder /bin/geoip /usr/local/bin/geoip
-COPY GeoLite2-City.mmdb /data/GeoLite2-City.mmdb
+COPY --from=builder /bin/geoip-loader /usr/local/bin/geoip-loader
+COPY --from=builder /bin/user-program-import /usr/local/bin/user-program-import
+COPY --from=builder /bin/user-program-dump /usr/local/bin/user-program-dump
+COPY --from=builder /bin/user-program-sync /usr/local/bin/user-program-sync
+COPY initial_data /initial_data
+RUN mkdir -p /update_data /log && \
+    chown -R ${APP_UID}:${APP_GID} /initial_data /update_data /log

-ENV GEOIP_DB_PATH=/data/GeoLite2-City.mmdb
+ENV GEOIP_DB_PATH=/initial_data/GeoLite2-City.mmdb
 USER appuser

 EXPOSE 8080
README.md: 38 changes
@@ -1,6 +1,6 @@
 # GeoIP REST (Go Fiber)

-A simple GeoIP lookup API. It uses `GeoLite2-City.mmdb` to map an IP to country/region/city/latitude/longitude.
+A simple GeoIP lookup API. By default it uses `GeoLite2-City.mmdb` to map an IP to country/region/city/latitude/longitude; optionally it can query data imported into PostgreSQL via `maxminddb_fdw`. After the initial load, a read-only table and function are created so lookups can be served from the DB alone.

 ## Requirements
 - Go 1.25+
@@ -14,16 +14,31 @@ go mod tidy  # generate go.sum if needed
 PORT=8080 GEOIP_DB_PATH=./GeoLite2-City.mmdb go run ./cmd/server
 ```

-### Docker Compose
+### First-time Docker setup
+```bash
+docker network create --driver bridge --attachable geo-ip
+```
+
+### Docker Compose (PostgreSQL + FDW + API)
 ```bash
 docker compose up --build
 ```
-- `GeoLite2-City.mmdb` is mounted read-only into the container.
-- Default port: `8080`.
+- Services
+  - `postgres` (5432): builds `maxminddb_fdw` from `Dockerfile.postgres`, installs the extension, reads `GeoLite2-City.mmdb` through the FDW, and loads it into a local table. Once the initial load completes, lookups work from the DB without the mmdb.
+  - `api` (8080): uses the Postgres backend (`GEOIP_BACKEND=postgres`) for lookups by default.
+- Volumes
+  - `./GeoLite2-City.mmdb:/initial_data/GeoLite2-City.mmdb:ro` (for the initial Postgres load)
+  - `pgdata` (persists DB data)

 ## Environment Variables
-- `PORT` (default `8080`): server listen port
-- `GEOIP_DB_PATH` (default `/data/GeoLite2-City.mmdb`): path to the GeoIP database
+- Common
+  - `PORT` (default `8080`): server listen port
+  - `GEOIP_BACKEND` (`mmdb`|`postgres`, default `mmdb`)
+- MMDB mode
+  - `GEOIP_DB_PATH` (default `/initial_data/GeoLite2-City.mmdb`): path to the GeoIP database
+- Postgres mode
+  - `DATABASE_URL`: e.g. `postgres://geoip_readonly:geoip_readonly@postgres:5432/geoip?sslmode=disable`
+  - `GEOIP_LOOKUP_QUERY` (optional): defaults to `geoip.lookup_city($1)`

 ## Usage
 - Health check: `GET /health`
@@ -49,6 +64,17 @@ curl "http://localhost:8080/lookup?ip=1.1.1.1"
 ```

 ## Development Notes
-- Key code: `cmd/server/main.go`, `internal/geo/resolver.go`
+- Key code: `cmd/server/main.go`, `internal/geo` (MMDB/Postgres resolvers)
 - Run tests: `go test ./...`
+- For Postgres integration tests, set `GEOIP_TEST_DATABASE_URL` to enable the DB-backend lookup tests (they are skipped when it is unset).
 - Container build: `docker build -t geoip:local .`

+## Postgres/FDW Query Examples
+- Single-row lookup function (CIDR match): `SELECT * FROM geoip.lookup_city('1.1.1.1');`
+- Raw table query: `SELECT * FROM geoip.city_blocks LIMIT 5;`
+- The API uses `lookup_city(inet)` to return the single most specific (longest-prefix) network match.
+
+## Security and Operations Notes
+- GeoLite2 license compliance: when replacing `GeoLite2-City.mmdb`, swap in the new file and restart.
+- If the Postgres port (5432) is exposed externally, restrict access with a firewall/security group and use a strong password.
+- The DB uses a read-only account (`geoip_readonly`) by default, and the initial schema enforces `default_transaction_read_only`.
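The README's examples are shell-based; as a rough Go equivalent of the `curl` call above. The JSON field names below are assumptions inferred from the columns returned by `lookup_city`, not confirmed by this diff:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Assumed response shape, mirroring the lookup_city columns.
type lookupResponse struct {
	IP        string  `json:"ip"`
	Country   string  `json:"country"`
	Region    string  `json:"region"`
	City      string  `json:"city"`
	Latitude  float64 `json:"latitude"`
	Longitude float64 `json:"longitude"`
}

func main() {
	resp, err := http.Get("http://localhost:8080/lookup?ip=1.1.1.1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var loc lookupResponse
	if err := json.NewDecoder(resp.Body).Decode(&loc); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s / %s / %s (%f, %f)\n",
		loc.IP, loc.Country, loc.Region, loc.City, loc.Latitude, loc.Longitude)
}
```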
cmd/loader/main.go (new file): 391 lines
@@ -0,0 +1,391 @@
package main

import (
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"log"
	"os"
	"strconv"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/oschwald/maxminddb-golang"
)

const (
	defaultMMDBPath      = "/initial_data/GeoLite2-City.mmdb"
	defaultSchema        = "geoip"
	defaultLoaderTimeout = 30 * time.Minute
)

type cityRecord struct {
	City struct {
		GeoNameID uint              `maxminddb:"geoname_id"`
		Names     map[string]string `maxminddb:"names"`
	} `maxminddb:"city"`
	Country struct {
		IsoCode string            `maxminddb:"iso_code"`
		Names   map[string]string `maxminddb:"names"`
	} `maxminddb:"country"`
	Subdivisions []struct {
		IsoCode string            `maxminddb:"iso_code"`
		Names   map[string]string `maxminddb:"names"`
	} `maxminddb:"subdivisions"`
	Location struct {
		Latitude  float64 `maxminddb:"latitude"`
		Longitude float64 `maxminddb:"longitude"`
		TimeZone  string  `maxminddb:"time_zone"`
	} `maxminddb:"location"`
}

type cityRow struct {
	network    string
	geonameID  int
	country    string
	countryISO string
	region     string
	regionISO  string
	city       string
	latitude   float64
	longitude  float64
	timeZone   string
}

func main() {
	dbURL := os.Getenv("DATABASE_URL")
	if dbURL == "" {
		log.Fatal("DATABASE_URL is required")
	}

	mmdbPath := env("GEOIP_DB_PATH", defaultMMDBPath)
	timeout := envDuration("GEOIP_LOADER_TIMEOUT", defaultLoaderTimeout)
	skipIfSame := envBool("GEOIP_LOADER_SKIP_IF_SAME_HASH", true)
	force := envBool("GEOIP_LOADER_FORCE", false)

	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	log.Printf("starting mmdb load from %s", mmdbPath)

	hash, err := fileSHA256(mmdbPath)
	if err != nil {
		log.Fatalf("failed to hash mmdb: %v", err)
	}

	conn, err := pgx.Connect(ctx, dbURL)
	if err != nil {
		log.Fatalf("failed to connect database: %v", err)
	}
	defer conn.Close(context.Background())

	if err := ensureSchema(ctx, conn); err != nil {
		log.Fatalf("failed to ensure schema: %v", err)
	}

	existingHash, err := currentHash(ctx, conn)
	if err != nil {
		log.Fatalf("failed to read metadata: %v", err)
	}
	if skipIfSame && !force && existingHash == hash {
		log.Printf("mmdb hash unchanged (%s), skipping load", hash)
		return
	}

	rowSource, err := newNetworkSource(mmdbPath)
	if err != nil {
		log.Fatalf("failed to open mmdb: %v", err)
	}
	defer rowSource.Close()

	if err := loadNetworks(ctx, conn, rowSource); err != nil {
		log.Fatalf("failed to load networks: %v", err)
	}

	if err := upsertHash(ctx, conn, hash); err != nil {
		log.Fatalf("failed to update metadata: %v", err)
	}

	log.Printf("loaded mmdb into Postgres (%d rows), hash=%s", rowSource.Rows(), hash)
}

func env(key, fallback string) string {
	if val := os.Getenv(key); val != "" {
		return val
	}
	return fallback
}

func envBool(key string, fallback bool) bool {
	val := os.Getenv(key)
	if val == "" {
		return fallback
	}
	parsed, err := strconv.ParseBool(val)
	if err != nil {
		return fallback
	}
	return parsed
}

func envDuration(key string, fallback time.Duration) time.Duration {
	val := os.Getenv(key)
	if val == "" {
		return fallback
	}
	d, err := time.ParseDuration(val)
	if err != nil {
		return fallback
	}
	return d
}

func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func ensureSchema(ctx context.Context, conn *pgx.Conn) error {
	ddl := fmt.Sprintf(`
CREATE SCHEMA IF NOT EXISTS %s;

CREATE TABLE IF NOT EXISTS %s.geoip_metadata (
    key text PRIMARY KEY,
    value text NOT NULL,
    updated_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS %s.city_lookup (
    network cidr PRIMARY KEY,
    geoname_id integer,
    country text,
    country_iso_code text,
    region text,
    region_iso_code text,
    city text,
    latitude double precision,
    longitude double precision,
    time_zone text
);
`, defaultSchema, defaultSchema, defaultSchema)

	_, err := conn.Exec(ctx, ddl)
	return err
}

func currentHash(ctx context.Context, conn *pgx.Conn) (string, error) {
	var hash sql.NullString
	err := conn.QueryRow(ctx, `SELECT value FROM geoip.geoip_metadata WHERE key = 'mmdb_sha256'`).Scan(&hash)
	if errors.Is(err, pgx.ErrNoRows) {
		return "", nil
	}
	if err != nil {
		return "", err
	}
	return hash.String, nil
}

func upsertHash(ctx context.Context, conn *pgx.Conn, hash string) error {
	_, err := conn.Exec(ctx, `
INSERT INTO geoip.geoip_metadata(key, value, updated_at)
VALUES ('mmdb_sha256', $1, now())
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value,
    updated_at = EXCLUDED.updated_at;
`, hash)
	return err
}

type networkSource struct {
	reader *maxminddb.Reader
	iter   *maxminddb.Networks
	err    error
	row    cityRow
	count  int
}

func newNetworkSource(path string) (*networkSource, error) {
	reader, err := maxminddb.Open(path)
	if err != nil {
		return nil, err
	}
	return &networkSource{
		reader: reader,
		iter:   reader.Networks(),
	}, nil
}

func (s *networkSource) Close() {
	if s.reader != nil {
		_ = s.reader.Close()
	}
}

func (s *networkSource) Rows() int {
	return s.count
}

func (s *networkSource) Next() bool {
	if !s.iter.Next() {
		s.err = s.iter.Err()
		return false
	}

	var rec cityRecord
	network, err := s.iter.Network(&rec)
	if err != nil {
		s.err = err
		return false
	}

	s.row = cityRow{
		network:    network.String(),
		geonameID:  int(rec.City.GeoNameID),
		country:    rec.Country.Names["en"],
		countryISO: rec.Country.IsoCode,
		region:     firstName(rec.Subdivisions),
		regionISO:  firstISO(rec.Subdivisions),
		city:       rec.City.Names["en"],
		latitude:   rec.Location.Latitude,
		longitude:  rec.Location.Longitude,
		timeZone:   rec.Location.TimeZone,
	}
	s.count++
	if s.count%500000 == 0 {
		log.Printf("loader progress: %d rows processed", s.count)
	}
	return true
}

func (s *networkSource) Values() ([]any, error) {
	return []any{
		s.row.network,
		s.row.geonameID,
		s.row.country,
		s.row.countryISO,
		s.row.region,
		s.row.regionISO,
		s.row.city,
		s.row.latitude,
		s.row.longitude,
		s.row.timeZone,
	}, nil
}

func (s *networkSource) Err() error {
	if s.err != nil {
		return s.err
	}
	return s.iter.Err()
}

func firstName(subdivisions []struct {
	IsoCode string            `maxminddb:"iso_code"`
	Names   map[string]string `maxminddb:"names"`
}) string {
	if len(subdivisions) == 0 {
		return ""
	}
	return subdivisions[0].Names["en"]
}

func firstISO(subdivisions []struct {
	IsoCode string            `maxminddb:"iso_code"`
	Names   map[string]string `maxminddb:"names"`
}) string {
	if len(subdivisions) == 0 {
		return ""
	}
	return subdivisions[0].IsoCode
}

func loadNetworks(ctx context.Context, conn *pgx.Conn, src *networkSource) error {
	tx, err := conn.Begin(ctx)
	if err != nil {
		return err
	}
	defer func() {
		_ = tx.Rollback(ctx)
	}()

	_, err = tx.Exec(ctx, `DROP TABLE IF EXISTS geoip.city_lookup_new; CREATE TABLE geoip.city_lookup_new (LIKE geoip.city_lookup INCLUDING ALL);`)
	if err != nil {
		return err
	}

	columns := []string{
		"network",
		"geoname_id",
		"country",
		"country_iso_code",
		"region",
		"region_iso_code",
		"city",
		"latitude",
		"longitude",
		"time_zone",
	}

	log.Printf("loader copy: starting bulk copy")
	copied, err := tx.CopyFrom(ctx, pgx.Identifier{defaultSchema, "city_lookup_new"}, columns, src)
	if err != nil {
		return err
	}
	log.Printf("loader copy: finished bulk copy (rows=%d)", copied)

	if _, err := tx.Exec(ctx, `
ALTER TABLE IF EXISTS geoip.city_lookup RENAME TO city_lookup_old;
ALTER TABLE geoip.city_lookup_new RENAME TO city_lookup;
DROP TABLE IF EXISTS geoip.city_lookup_old;
`); err != nil {
		return err
	}

	if _, err := tx.Exec(ctx, `
CREATE INDEX IF NOT EXISTS city_lookup_network_gist ON geoip.city_lookup USING gist (network inet_ops);
CREATE INDEX IF NOT EXISTS city_lookup_geoname_id_idx ON geoip.city_lookup (geoname_id);
`); err != nil {
		return err
	}

	if _, err := tx.Exec(ctx, `
CREATE OR REPLACE FUNCTION geoip.lookup_city(ip inet)
RETURNS TABLE (
    ip inet,
    country text,
    region text,
    city text,
    latitude double precision,
    longitude double precision
) LANGUAGE sql STABLE AS $$
    SELECT
        $1::inet AS ip,
        c.country,
        c.region,
        c.city,
        c.latitude,
        c.longitude
    FROM geoip.city_lookup c
    WHERE c.network >>= $1
    ORDER BY masklen(c.network) DESC
    LIMIT 1;
$$;
`); err != nil {
		return err
	}

	return tx.Commit(ctx)
}
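The loader streams rows into `COPY` by having `networkSource` satisfy pgx's `CopyFromSource` interface (`Next`/`Values`/`Err`), so the whole MMDB never has to sit in memory. A minimal self-contained sketch of the same mechanism, using `pgx.CopyFromRows` over an in-memory slice instead of an MMDB iterator; the DSN and sample row values are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Placeholder DSN for illustration only.
	conn, err := pgx.Connect(ctx, "postgres://geoip:geoip@localhost:5432/geoip")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	rows := [][]any{
		{"1.1.1.0/24", 2077456, "Australia", "AU", "", "", "", -33.494, 143.2104, "Australia/Sydney"},
	}
	// CopyFromRows wraps a slice in the same pgx.CopyFromSource interface
	// that the loader's networkSource implements with Next/Values/Err.
	copied, err := conn.CopyFrom(ctx,
		pgx.Identifier{"geoip", "city_lookup_new"},
		[]string{"network", "geoname_id", "country", "country_iso_code", "region", "region_iso_code", "city", "latitude", "longitude", "time_zone"},
		pgx.CopyFromRows(rows),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d rows", copied)
}
```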
@@ -1,33 +1,57 @@
 package main

 import (
 	"context"
 	"errors"
 	"fmt"
 	"log"
 	"net/url"
 	"os"
 	"path/filepath"
 	"strconv"
 	"strings"
 	"sync"
 	"time"

 	"github.com/gofiber/fiber/v2"
 	"github.com/gofiber/fiber/v2/middleware/logger"

 	"geoip-rest/internal/geo"
 	"geoip-rest/internal/schedule"
 )

 const (
 	defaultPort   = "8080"
-	defaultDBPath = "/data/GeoLite2-City.mmdb"
+	defaultDBPath = "/initial_data/GeoLite2-City.mmdb"
+	defaultCron   = "5 0 * * *" // daily at 00:05 KST
+	defaultJob    = "user-program-sync"
 )

 func main() {
+	backend := geo.Backend(env("GEOIP_BACKEND", string(geo.BackendMMDB)))
 	dbPath := env("GEOIP_DB_PATH", defaultDBPath)
+	dbURL := os.Getenv("DATABASE_URL")
+	lookupQuery := os.Getenv("GEOIP_LOOKUP_QUERY")
 	port := env("PORT", defaultPort)

-	resolver, err := geo.NewResolver(dbPath)
+	resolver, err := geo.NewResolver(geo.Config{
+		Backend:     backend,
+		MMDBPath:    dbPath,
+		DatabaseURL: dbURL,
+		LookupQuery: lookupQuery,
+	})
 	if err != nil {
-		log.Fatalf("failed to open GeoIP database: %v", err)
+		log.Fatalf("failed to initialize resolver: %v", err)
 	}
 	defer resolver.Close()

 	app := fiber.New(fiber.Config{
 		DisableStartupMessage: true,
 		ReadBufferSize:        16 * 1024, // allow larger request headers (e.g., proxy cookies)
 	})

+	app.Use(newFileLogger(env("ACCESS_LOG_PATH", "/log/api-access.log")))
+
 	app.Get("/", func(c *fiber.Ctx) error {
 		return c.JSON(fiber.Map{
 			"service": "geoip-rest",
@@ -50,29 +74,205 @@ func main() {

 		location, err := resolver.Lookup(ip)
 		if err != nil {
-			if err == geo.ErrInvalidIP {
+			switch {
+			case errors.Is(err, geo.ErrInvalidIP):
 				return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{
 					"error": "invalid ip address",
 				})
-			}
+			case errors.Is(err, geo.ErrNotFound):
+				return c.Status(fiber.StatusNotFound).JSON(fiber.Map{
+					"error": "location not found",
+				})
+			default:
+				return c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{
+					"error": "lookup failed",
+				})
+			}
 		}

 		return c.JSON(location)
 	})

-	log.Printf("starting GeoIP API on :%s using %s", port, dbPath)
+	log.Printf("starting GeoIP API on :%s backend=%s", port, backend)
+	switch backend {
+	case geo.BackendPostgres:
+		log.Printf("using postgres DSN %s", sanitizeDBURL(dbURL))
+	default:
+		log.Printf("using mmdb path %s", dbPath)
+	}
+
+	stopScheduler := maybeStartScheduler()
+	defer func() {
+		if stopScheduler != nil {
+			ctx := stopScheduler()
+			<-ctx.Done()
+		}
+	}()

 	if err := app.Listen(":" + port); err != nil {
 		log.Fatalf("server stopped: %v", err)
 	}
 }
+
+func newFileLogger(path string) fiber.Handler {
+	if path == "" {
+		return func(c *fiber.Ctx) error { return c.Next() }
+	}
+	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
+		log.Printf("access log disabled (mkdir failed: %v)", err)
+		return func(c *fiber.Ctx) error { return c.Next() }
+	}
+	maxBytes := int64(envInt("ACCESS_LOG_MAX_BYTES", 10*1024*1024))
+	writer, err := newRotatingWriter(path, maxBytes)
+	if err != nil {
+		log.Printf("access log disabled (open failed: %v)", err)
+		return func(c *fiber.Ctx) error { return c.Next() }
+	}
+
+	format := "${time} ip=${ip} real_ip=${header:X-Real-IP} forwarded=${header:X-Forwarded-For} ${method} ${path} ${protocol} ${status} ${latency_human} ua=\"${ua}\" headers=\"${reqHeadersShort}\"\n"
+	cfg := logger.Config{
+		Format:     format,
+		TimeFormat: time.RFC3339,
+		TimeZone:   "Asia/Seoul",
+		Output:     writer,
+		CustomTags: map[string]logger.LogFunc{
+			"reqHeadersShort": func(output logger.Buffer, c *fiber.Ctx, data *logger.Data, param string) (int, error) {
+				const max = 1024
+				h := c.Request().Header.String()
+				if len(h) > max {
+					h = h[:max] + "...(truncated)"
+				}
+				return output.WriteString(strings.TrimSpace(h))
+			},
+		},
+	}
+	return logger.New(cfg)
+}
+
+type rotatingWriter struct {
+	mu       sync.Mutex
+	path     string
+	maxBytes int64
+	file     *os.File
+}
+
+func newRotatingWriter(path string, maxBytes int64) (*rotatingWriter, error) {
+	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
+	if err != nil {
+		return nil, err
+	}
+	return &rotatingWriter{
+		path:     path,
+		maxBytes: maxBytes,
+		file:     f,
+	}, nil
+}
+
+func (w *rotatingWriter) Write(p []byte) (int, error) {
+	w.mu.Lock()
+	defer w.mu.Unlock()
+
+	if err := w.rotateIfNeeded(len(p)); err != nil {
+		return 0, err
+	}
+	return w.file.Write(p)
+}
+
+func (w *rotatingWriter) rotateIfNeeded(incoming int) error {
+	info, err := w.file.Stat()
+	if err != nil {
+		return err
+	}
+	if info.Size()+int64(incoming) <= w.maxBytes {
+		return nil
+	}
+	_ = w.file.Close()
+
+	ts := time.Now().Format("20060102-150405")
+	rotated := fmt.Sprintf("%s.%s", w.path, ts)
+	if err := os.Rename(w.path, rotated); err != nil {
+		// attempt to reopen original to keep logging
+		w.file, _ = os.OpenFile(w.path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
+		return err
+	}
+
+	f, err := os.OpenFile(w.path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
+	if err != nil {
+		return err
+	}
+	w.file = f
+	return nil
+}
+
+func env(key, fallback string) string {
+	if val := os.Getenv(key); val != "" {
+		return val
+	}
+	return fallback
+}
+
+func envBool(key string, fallback bool) bool {
+	val := os.Getenv(key)
+	if val == "" {
+		return fallback
+	}
+	switch strings.ToLower(val) {
+	case "1", "t", "true", "y", "yes", "on":
+		return true
+	case "0", "f", "false", "n", "no", "off":
+		return false
+	default:
+		return fallback
+	}
+}
+
+func envInt(key string, fallback int) int {
+	val := os.Getenv(key)
+	if val == "" {
+		return fallback
+	}
+	parsed, err := strconv.Atoi(val)
+	if err != nil {
+		return fallback
+	}
+	return parsed
+}
+
+func sanitizeDBURL(raw string) string {
+	u, err := url.Parse(raw)
+	if err != nil {
+		return "postgres"
+	}
+	return u.Redacted()
+}
+
+func maybeStartScheduler() func() context.Context {
+	enabled := envBool("USER_PROGRAM_CRON_ENABLE", false)
+	if !enabled {
+		return nil
+	}
+	cronExpr := defaultCron
+	command := defaultJob
+
+	sched, err := schedule.Start(schedule.Config{
+		CronExpr: cronExpr,
+		Command:  command,
+		Logger:   log.Default(),
+	})
+	if err != nil {
+		log.Printf("scheduler not started (error=%v)", err)
+		return nil
+	}

+	return func() context.Context {
+		ctx := sched.Stop()
+		timer := time.NewTimer(2 * time.Second)
+		select {
+		case <-ctx.Done():
+			timer.Stop()
+			return ctx
+		case <-timer.C:
+			return ctx
+		}
+	}
+}
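The `rotatingWriter` above rotates by size before each write. A small scratch program exercising that behavior; it assumes the `rotatingWriter` code above is copied into the same package, and the path and byte cap are illustrative:

```go
package main

import "log"

func main() {
	// A 32-byte cap forces rotation quickly.
	w, err := newRotatingWriter("/tmp/access.log", 32)
	if err != nil {
		log.Fatal(err)
	}
	// 11 bytes: fits under the cap, written to /tmp/access.log.
	if _, err := w.Write([]byte("first line\n")); err != nil {
		log.Fatal(err)
	}
	// 11 + 33 bytes exceeds maxBytes, so rotateIfNeeded renames
	// /tmp/access.log to /tmp/access.log.<timestamp> and reopens a fresh file.
	if _, err := w.Write([]byte("second line exceeds the byte cap\n")); err != nil {
		log.Fatal(err)
	}
}
```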
cmd/user_program_dump/main.go (new file): 63 lines
@@ -0,0 +1,63 @@
package main

import (
	"context"
	"log"
	"os"
	"strconv"
	"time"

	"geoip-rest/internal/userprogram"
)

const defaultDumpTimeout = 5 * time.Minute

func main() {
	logger := log.New(os.Stdout, "[dump] ", log.LstdFlags)

	mysqlCfg, err := userprogram.NewMySQLConfigFromEnv()
	if err != nil {
		log.Fatalf("config error: %v", err)
	}
	updateDir := userprogram.DefaultUpdateDir
	if val := os.Getenv("USER_PROGRAM_UPDATE_DIR"); val != "" {
		updateDir = val
	}
	target, err := userprogram.ParseTargetDate(os.Getenv("USER_PROGRAM_TARGET_DATE"))
	if err != nil {
		log.Fatalf("target date error: %v", err)
	}
	startID := int64(0)
	if val := os.Getenv("USER_PROGRAM_START_ID"); val != "" {
		parsed, parseErr := strconv.ParseInt(val, 10, 64)
		if parseErr != nil {
			log.Fatalf("invalid USER_PROGRAM_START_ID: %v", parseErr)
		}
		startID = parsed
	}

	ctx, cancel := context.WithTimeout(context.Background(), defaultDumpTimeout)
	defer cancel()

	dumper, err := userprogram.NewDumper(mysqlCfg, updateDir)
	if err != nil {
		log.Fatalf("init dumper failed: %v", err)
	}
	defer dumper.Close()

	endID, err := dumper.MaxIDUntil(ctx, target)
	if err != nil {
		log.Fatalf("determine end id failed: %v", err)
	}
	if endID <= startID {
		logger.Printf("no rows to dump (start_id=%d end_id=%d)", startID, endID)
		return
	}

	outPath, err := dumper.DumpRange(ctx, startID, endID, target)
	if err != nil {
		log.Fatalf("dump failed: %v", err)
	}

	logger.Printf("dumped ids (%d, %d] to %s", startID, endID, outPath)
}
cmd/user_program_import/main.go (new file): 80 lines
@@ -0,0 +1,80 @@
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/jackc/pgx/v5"

	"geoip-rest/internal/importer"
	"geoip-rest/internal/userprogram"
)

const (
	defaultCSVPath   = "/initial_data/user_program_info_init_20251208.csv"
	defaultUpdateDir = "/update_data"
	defaultTimeout   = 10 * time.Minute
	defaultSchema    = "public"
	defaultLogDir    = "/log"
	targetTableName  = "user_program_info_replica"
)

func main() {
	dbURL, err := databaseURL()
	if err != nil {
		log.Fatalf("database config: %v", err)
	}

	csvPath := env("USER_PROGRAM_INFO_CSV", defaultCSVPath)
	updateDir := env("USER_PROGRAM_UPDATE_DIR", defaultUpdateDir)
	schema := env("USER_PROGRAM_INFO_SCHEMA", env("POSTGRES_SCHEMA", defaultSchema))
	logDir := env("USER_PROGRAM_IMPORT_LOG_DIR", defaultLogDir)

	ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	conn, err := pgx.Connect(ctx, dbURL)
	if err != nil {
		log.Fatalf("failed to connect to database: %v", err)
	}
	defer conn.Close(context.Background())

	if err := importer.EnsureUserProgramReplica(ctx, conn, csvPath, schema, logDir); err != nil {
		log.Fatalf("failed to ensure %s table: %v", targetTableName, err)
	}

	if err := importer.ImportUserProgramUpdates(ctx, conn, updateDir, schema, logDir); err != nil {
		log.Fatalf("failed to import updates from %s: %v", updateDir, err)
	}

	if err := userprogram.SeedIPGeoInfoIfMissing(ctx, conn, schema); err != nil {
		log.Fatalf("failed to seed ip_geoinfo: %v", err)
	}

	log.Printf("%s is ready in schema %s using data from %s (updates: %s)", targetTableName, schema, csvPath, updateDir)
}

func env(key, fallback string) string {
	if val := os.Getenv(key); val != "" {
		return val
	}
	return fallback
}

func databaseURL() (string, error) {
	if url := os.Getenv("DATABASE_URL"); url != "" {
		return url, nil
	}
	user := os.Getenv("POSTGRES_USER")
	pass := os.Getenv("POSTGRES_PASSWORD")
	host := env("POSTGRES_HOST", "localhost")
	port := env("POSTGRES_PORT", "5432")
	db := os.Getenv("POSTGRES_DB")
	if user == "" || pass == "" || db == "" {
		return "", fmt.Errorf("DATABASE_URL or POSTGRES_{USER,PASSWORD,DB} is required")
	}
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", user, pass, host, port, db), nil
}
cmd/user_program_sync/main.go (new file): 70 lines
@@ -0,0 +1,70 @@
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"geoip-rest/internal/userprogram"
)

const defaultTimeout = 30 * time.Minute

func main() {
	logger := log.New(os.Stdout, "[sync] ", log.LstdFlags)

	dbURL, err := databaseURL()
	if err != nil {
		logger.Fatalf("database config: %v", err)
	}

	mysqlCfg, err := userprogram.NewMySQLConfigFromEnv()
	if err != nil {
		logger.Fatalf("mysql config: %v", err)
	}
	paths, err := userprogram.NewPathsFromEnv()
	if err != nil {
		logger.Fatalf("paths config: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	if err := userprogram.Sync(ctx, userprogram.SyncConfig{
		MySQL:       mysqlCfg,
		DatabaseURL: dbURL,
		Backend:     userprogram.BackendFromEnv(),
		LookupQuery: os.Getenv("GEOIP_LOOKUP_QUERY"),
		MMDBPath:    os.Getenv("GEOIP_DB_PATH"),
		InitialCSV:  paths.InitialCSV,
		UpdateDir:   paths.UpdateDir,
		LogDir:      paths.LogDir,
		Schema:      paths.Schema,
		Logger:      logger,
	}); err != nil {
		logger.Fatalf("sync failed: %v", err)
	}
}

func databaseURL() (string, error) {
	if url := os.Getenv("DATABASE_URL"); url != "" {
		return url, nil
	}
	user := os.Getenv("POSTGRES_USER")
	pass := os.Getenv("POSTGRES_PASSWORD")
	host := os.Getenv("POSTGRES_HOST")
	if host == "" {
		host = "localhost"
	}
	port := os.Getenv("POSTGRES_PORT")
	if port == "" {
		port = "5432"
	}
	db := os.Getenv("POSTGRES_DB")
	if user == "" || pass == "" || db == "" {
		return "", fmt.Errorf("DATABASE_URL or POSTGRES_{USER,PASSWORD,DB} is required")
	}
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", user, pass, host, port, db), nil
}
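Both `user-program-import` and `user-program-sync` share the same DSN fallback: `DATABASE_URL` wins when set, otherwise the URL is assembled from `POSTGRES_*` variables. A short sketch of that precedence, with placeholder credentials:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// DATABASE_URL takes precedence when set.
	if url := os.Getenv("DATABASE_URL"); url != "" {
		fmt.Println(url)
		return
	}
	// Otherwise the DSN is assembled from POSTGRES_* variables.
	user, pass, db := os.Getenv("POSTGRES_USER"), os.Getenv("POSTGRES_PASSWORD"), os.Getenv("POSTGRES_DB")
	if user == "" || pass == "" || db == "" {
		fmt.Println("DATABASE_URL or POSTGRES_{USER,PASSWORD,DB} is required")
		return
	}
	fmt.Printf("postgres://%s:%s@localhost:5432/%s?sslmode=disable\n", user, pass, db)
}
```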
deploy/postgres/init/00_geoip.sql (new file): 62 lines
@@ -0,0 +1,62 @@
SET client_min_messages TO WARNING;

CREATE SCHEMA IF NOT EXISTS geoip;
SET search_path TO geoip, public;

CREATE TABLE IF NOT EXISTS geoip_metadata (
    key text PRIMARY KEY,
    value text NOT NULL,
    updated_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS city_lookup (
    network cidr PRIMARY KEY,
    geoname_id integer,
    country text,
    country_iso_code text,
    region text,
    region_iso_code text,
    city text,
    latitude double precision,
    longitude double precision,
    time_zone text
);

CREATE INDEX IF NOT EXISTS city_lookup_network_gist ON city_lookup USING gist (network inet_ops);
CREATE INDEX IF NOT EXISTS city_lookup_geoname_id_idx ON city_lookup (geoname_id);

CREATE OR REPLACE FUNCTION lookup_city(ip inet)
RETURNS TABLE (
    ip inet,
    country text,
    region text,
    city text,
    latitude double precision,
    longitude double precision
) LANGUAGE sql STABLE AS $$
    SELECT
        $1::inet AS ip,
        c.country,
        c.region,
        c.city,
        c.latitude,
        c.longitude
    FROM city_lookup c
    WHERE c.network >>= $1
    ORDER BY masklen(c.network) DESC
    LIMIT 1;
$$;

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'geoip_readonly') THEN
        CREATE ROLE geoip_readonly LOGIN PASSWORD 'geoip_readonly';
        ALTER ROLE geoip_readonly SET default_transaction_read_only = on;
    END IF;
END$$;

GRANT USAGE ON SCHEMA geoip TO geoip_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA geoip TO geoip_readonly;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA geoip TO geoip_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA geoip GRANT SELECT ON TABLES TO geoip_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA geoip GRANT EXECUTE ON FUNCTIONS TO geoip_readonly;
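`lookup_city` resolves the single most specific (longest-prefix) network via `network >>= ip`, served by the GiST index. One way to exercise it from Go, assuming a reachable database; the host is a placeholder, and production code should scan into `sql.Null*` types since rows with missing city data contain NULLs:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Connects as the read-only role created by the init script above;
	// host and database name are illustrative.
	conn, err := pgx.Connect(ctx, "postgres://geoip_readonly:geoip_readonly@localhost:5432/geoip")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	var country, region, city string
	var lat, lon float64
	err = conn.QueryRow(ctx,
		`SELECT country, region, city, latitude, longitude FROM geoip.lookup_city($1)`,
		"1.1.1.1",
	).Scan(&country, &region, &city, &lat, &lon)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s / %s / %s (%f, %f)", country, region, city, lat, lon)
}
```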
deploy/postgres/init/01_tuning.sql (new file): 2 lines
@@ -0,0 +1,2 @@
-- Reduce checkpoint churn during bulk MMDB load
ALTER SYSTEM SET max_wal_size = '4GB';
@@ -1,11 +1,73 @@
 services:
   api:
-    build: .
+    build:
+      context: .
+      args:
+        - APP_UID=${APP_UID:-1000}
+        - APP_GID=${APP_GID:-1000}
+    env_file:
+      - .env
+    depends_on:
+      db:
+        condition: service_healthy
     ports:
-      - "8080:8080"
+      - "${SERVICE_PORT:-8080}:8080"
     environment:
       - TZ=Asia/Seoul
       - PORT=8080
-      - GEOIP_DB_PATH=/data/GeoLite2-City.mmdb
+      - GEOIP_DB_PATH=${GEOIP_DB_PATH:-/initial_data/GeoLite2-City.mmdb}
+      - GEOIP_BACKEND=${GEOIP_BACKEND:-mmdb}
+      - GEOIP_LOADER_TIMEOUT=${GEOIP_LOADER_TIMEOUT:-30m}
+      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST:-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB}?sslmode=disable
+    command: >
+      sh -c '
+        set -e;
+        if [ "${USER_PROGRAM_IMPORT_ON_START:-true}" = "true" ]; then
+          echo "[api] running user-program-import before api start";
+          user-program-import;
+        else
+          echo "[api] skipping user-program-import (USER_PROGRAM_IMPORT_ON_START=${USER_PROGRAM_IMPORT_ON_START})";
+        fi;
+        if [ "${GEOIP_BACKEND}" = "postgres" ]; then
+          echo "[api] running geoip-loader before api start";
+          geoip-loader;
+        else
+          echo "[api] skipping geoip-loader (backend=${GEOIP_BACKEND})";
+        fi;
+        exec geoip
+      '
     volumes:
-      - ./GeoLite2-City.mmdb:/data/GeoLite2-City.mmdb:ro
+      - ./initial_data:/initial_data:ro
+      - ./update_data:/update_data
+      - ./log:/log
+    networks:
+      - geo-ip
+
+  db:
+    image: postgres:17.7-trixie
+    env_file:
+      - .env
+    environment:
+      - TZ=Asia/Seoul
+      - POSTGRES_DB=${POSTGRES_DB}
+      - POSTGRES_USER=${POSTGRES_USER}
+      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
+    ports:
+      - "${POSTGRES_PORT:-5432}:5432"
+    volumes:
+      - ./deploy/postgres/init:/docker-entrypoint-initdb.d:ro
+      - postgres_data:/var/lib/postgresql/data
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 10s
+    networks:
+      - geo-ip
+
+volumes:
+  postgres_data:
+
+networks:
+  geo-ip:
go.mod: 14 changes
@@ -3,21 +3,31 @@ module geoip-rest
 go 1.25

 require (
+	github.com/go-sql-driver/mysql v1.8.1
 	github.com/gofiber/fiber/v2 v2.52.8
+	github.com/jackc/pgx/v5 v5.7.6
 	github.com/oschwald/geoip2-golang v1.9.0
+	github.com/oschwald/maxminddb-golang v1.11.0
+	github.com/robfig/cron/v3 v3.0.1
 )

 require (
+	filippo.io/edwards25519 v1.1.0 // indirect
 	github.com/andybalholm/brotli v1.1.0 // indirect
+	github.com/google/uuid v1.6.0 // indirect
+	github.com/jackc/pgpassfile v1.0.0 // indirect
+	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
+	github.com/jackc/puddle/v2 v2.2.2 // indirect
 	github.com/klauspost/compress v1.17.9 // indirect
 	github.com/mattn/go-colorable v0.1.13 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
 	github.com/mattn/go-runewidth v0.0.16 // indirect
-	github.com/oschwald/maxminddb-golang v1.11.0 // indirect
 	github.com/rivo/uniseg v0.2.0 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	github.com/valyala/fasthttp v1.51.0 // indirect
 	github.com/valyala/tcplisten v1.0.0 // indirect
-	golang.org/x/sys v0.28.0 // indirect
+	golang.org/x/crypto v0.37.0 // indirect
+	golang.org/x/sync v0.13.0 // indirect
+	golang.org/x/sys v0.32.0 // indirect
 	golang.org/x/text v0.24.0 // indirect
 )
go.sum: 30 changes
@@ -1,11 +1,24 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/gofiber/fiber/v2 v2.52.8 h1:xl4jJQ0BV5EJTA2aWiKw/VddRpHrKeZLF0QPUxqn0x4=
github.com/gofiber/fiber/v2 v2.52.8/go.mod h1:YEcBbO/FB+5M1IZNBP9FO3J9281zgPAreiI1oqg8nDw=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -23,6 +36,11 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
@@ -31,9 +49,17 @@ github.com/valyala/fasthttp v1.51.0 h1:8b30A5JlZ6C7AS81RsWjYMQmrZG6feChmgAolCl1S
github.com/valyala/fasthttp v1.51.0/go.mod h1:oI2XroL+lI7vdXyYoQk03bXBThfFl2cVdIA3Xl7cH8g=
github.com/valyala/tcplisten v1.0.0 h1:rBHj/Xf+E1tRGZyWIWwJDiRY0zc1Js+CV5DqwacVSA8=
github.com/valyala/tcplisten v1.0.0/go.mod h1:T0xQ8SeCZGxckz9qRXTfG43PvQ/mcWh7FwZEA7Ioqkc=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
Binary file changed: 60 MiB before, 60 MiB after.
initial_data/ip_geoinfo_seed_20251208.sql (new file, 5880 lines): diff suppressed because it is too large.
initial_data/public_ip_list_20251208.csv (new file): 653 lines
@@ -0,0 +1,653 @@
"login_public_ip"
"14.48.84.141"
"211.234.204.251"
"117.111.6.13"
"211.198.190.49"
"211.235.72.178"
"1.235.81.27"
"114.200.179.133"
"121.184.7.188"
"211.234.192.13"
"211.235.72.147"
"218.158.86.157"
"125.133.48.195"
"112.161.155.175"
"183.99.124.97"
"220.66.76.78"
"121.191.223.33"
"125.138.71.211"
"59.11.165.82"
"115.178.65.26"
"211.235.91.226"
"59.10.150.245"
"211.234.207.159"
"115.21.217.190"
"211.234.201.231"
"118.47.158.73"
"211.227.202.187"
"211.107.218.58"
"211.216.248.115"
"220.66.76.90"
"182.229.32.154"
"220.66.75.59"
"211.234.180.197"
"110.13.11.131"
"121.191.17.213"
"61.255.88.218"
"175.223.11.244"
"1.239.250.231"
"220.66.76.12"
"220.77.100.238"
"220.82.170.73"
"121.169.76.48"
"106.101.1.212"
"140.174.179.52"
"59.14.240.38"
"220.66.76.27"
"220.86.36.184"
"125.142.226.122"
"211.230.115.75"
"106.246.182.243"
"27.171.216.148"
"218.158.239.173"
"118.33.187.225"
"59.16.104.87"
"147.46.92.180"
"59.16.73.144"
"14.40.91.101"
"106.101.3.30"
"211.253.98.34"
"58.232.80.152"
"121.184.234.136"
"183.100.230.19"
"175.204.180.9"
"61.78.80.26"
"118.36.229.195"
"220.66.76.86"
"222.120.70.104"
"119.196.164.92"
"156.59.47.103"
"147.46.91.174"
"59.151.192.104"
"211.234.192.53"
"118.42.197.237"
"202.150.191.177"
"110.70.51.111"
"121.139.50.83"
"147.46.91.171"
"147.46.92.184"
"147.46.91.170"
"61.73.2.206"
"59.31.157.91"
"110.8.170.18"
"106.244.132.248"
"211.192.150.114"
"115.22.123.154"
"211.234.195.205"
"211.234.207.45"
"114.108.4.68"
"14.56.254.183"
"220.66.76.34"
"210.121.223.76"
"180.80.112.236"
"147.46.91.163"
"118.216.73.120"
"59.29.126.221"
"14.7.55.231"
"211.244.123.114"
"220.121.164.74"
"147.46.91.167"
"115.138.239.234"
"121.157.84.116"
"147.46.92.169"
"121.135.102.173"
"220.90.89.151"
"119.204.165.88"
"147.46.91.138"
"147.46.91.150"
"211.235.72.16"
"121.153.208.177"
"121.132.197.222"
"183.99.111.70"
"223.39.177.150"
"147.46.91.168"
"147.46.91.130"
"211.234.192.51"
"121.164.134.117"
"211.226.165.121"
"182.208.205.103"
"58.224.147.180"
"220.66.76.25"
"220.125.153.37"
"156.59.47.87"
"220.66.75.19"
"220.66.76.97"
"211.235.83.117"
"115.95.35.118"
"147.46.92.183"
"211.234.201.160"
"147.46.92.75"
"221.153.165.116"
"147.46.91.156"
"147.46.91.161"
"61.85.224.4"
"106.101.130.197"
"220.74.62.74"
"121.180.83.184"
"220.77.186.166"
"112.166.96.247"
"175.210.109.187"
"125.129.140.58"
"220.66.76.95"
"211.234.197.119"
"220.66.75.146"
"1.235.32.245"
"49.142.69.179"
"218.146.23.149"
"14.34.247.171"
"125.135.247.165"
"211.234.226.128"
"211.234.200.241"
"106.101.0.195"
"218.153.177.253"
"211.235.72.65"
"221.154.56.116"
"220.66.76.16"
"39.127.71.122"
"222.105.23.150"
"223.39.219.99"
"220.90.15.33"
"121.149.50.106"
"121.64.161.83"
"211.235.65.89"
"140.174.179.101"
"39.7.47.68"
"211.218.250.217"
"221.138.240.198"
"106.101.9.220"
"220.66.76.29"
"211.198.63.117"
"147.46.91.145"
"211.221.180.2"
"59.186.123.155"
"58.77.140.10"
"222.239.194.164"
"220.66.75.28"
"27.165.137.52"
"175.223.26.63"
"121.141.47.152"
"125.133.34.162"
"147.46.92.177"
"223.39.212.136"
"147.46.91.141"
"220.116.237.88"
"106.101.11.6"
"125.242.55.17"
"106.101.128.56"
"210.117.14.133"
"156.59.47.101"
"147.46.92.84"
"118.221.140.2"
"211.36.143.98"
"106.101.136.2"
"220.78.198.149"
"147.46.92.182"
"59.186.123.246"
"112.163.158.164"
"61.81.132.10"
"117.111.21.63"
"140.174.179.5"
"121.151.201.239"
"182.215.120.69"
"218.144.247.3"
"116.33.55.218"
"147.46.91.137"
"220.66.75.39"
"220.117.8.14"
"175.223.26.207"
"116.34.125.138"
"1.241.69.200"
"211.234.194.156"
"121.138.66.189"
"210.105.187.27"
"221.160.85.228"
"58.237.207.130"
"220.66.76.24"
"211.234.192.81"
"122.44.13.14"
"220.66.76.11"
"117.111.14.205"
"125.248.23.189"
"118.35.141.164"
"210.99.28.101"
"119.197.226.84"
"211.234.203.248"
"1.247.148.164"
"61.39.66.227"
"112.161.208.191"
"175.213.178.48"
"14.35.204.124"
"211.234.188.148"
"211.234.199.137"
"1.238.107.99"
"119.206.79.88"
"147.46.35.247"
"106.101.1.8"
"223.39.176.122"
"222.103.33.142"
"110.70.47.11"
"114.108.4.72"
"147.46.91.177"
"220.74.14.94"
"211.234.194.250"
"112.158.34.172"
"112.170.16.221"
"221.161.34.243"
"59.1.172.178"
"147.46.92.195"
"121.148.130.230"
"220.121.167.216"
"115.89.238.220"
"220.74.97.67"
"222.99.115.189"
"112.218.197.242"
"156.59.47.105"
"106.101.3.174"
"114.108.4.74"
"211.36.152.1"
"110.70.54.119"
"211.36.152.88"
"211.218.206.137"
"211.222.129.21"
"218.148.114.204"
"156.59.47.89"
"156.59.47.102"
"221.159.100.39"
"210.206.95.190"
"223.39.176.221"
"125.137.245.86"
"118.44.169.233"
"211.185.247.104"
"175.121.118.4"
"125.136.144.163"
"61.77.58.136"
"222.110.160.71"
"14.39.86.155"
"211.36.158.211"
"112.164.250.69"
"1.235.111.48"
"223.39.219.238"
"211.36.159.30"
"223.39.174.191"
"106.248.204.158"
"211.234.198.196"
"114.203.88.3"
"211.109.114.59"
"125.130.142.245"
"222.116.153.103"
"58.227.62.3"
"121.188.105.131"
"121.188.98.4"
"118.235.88.163"
"110.35.50.202"
"175.204.137.93"
"14.45.119.89"
"59.8.140.188"
"59.0.82.13"
"147.46.91.134"
"211.36.152.213"
"140.174.179.54"
"147.46.91.155"
"106.101.9.19"
"147.46.91.136"
"211.234.188.82"
"223.39.219.31"
"27.166.222.166"
"118.44.178.75"
"121.187.10.74"
"210.204.169.25"
"218.152.55.210"
"118.39.26.132"
"147.46.91.162"
"61.82.142.93"
"147.46.91.169"
"147.46.91.148"
"211.185.247.57"
"121.66.158.246"
"59.1.229.49"
"119.207.27.75"
"118.235.13.218"
"106.101.11.31"
"203.228.37.61"
"121.177.226.29"
"211.36.142.3"
"175.197.85.244"
"115.23.63.15"
"220.149.222.50"
"112.164.121.147"
"112.170.151.209"
"220.76.77.96"
"59.27.94.66"
"211.234.181.75"
"220.70.176.152"
"112.187.54.166"
"220.66.76.85"
"125.132.106.239"
"147.46.92.153"
"121.151.201.47"
"211.211.117.44"
"211.253.98.18"
"223.39.218.48"
"116.121.107.104"
"110.70.54.144"
"211.235.82.41"
"211.193.241.72"
"220.84.21.5"
"147.46.91.140"
"117.111.12.85"
"39.125.46.181"
"220.66.76.22"
"223.53.98.220"
"147.46.92.67"
"211.36.136.245"
"220.66.76.81"
"222.114.41.134"
"211.48.217.138"
"42.27.139.140"
"220.66.76.89"
"175.223.19.27"
"223.39.218.24"
"147.46.91.146"
"119.207.166.243"
"14.53.188.21"
"147.46.92.81"
"147.46.91.149"
"27.168.114.250"
"118.37.166.161"
"211.234.181.59"
"125.179.210.215"
"211.223.112.37"
"211.235.74.158"
"117.111.5.23"
"106.101.2.36"
"211.54.94.161"
"42.20.3.222"
"211.234.226.189"
"211.234.180.75"
"147.46.92.69"
"211.234.203.21"
"39.7.54.240"
"210.93.112.123"
"123.111.42.110"
"119.204.117.36"
"220.83.108.31"
"223.39.218.193"
"147.46.91.143"
"222.107.72.242"
"140.174.179.105"
"220.66.75.91"
"223.39.215.148"
"147.46.91.166"
"147.46.91.157"
"121.187.162.200"
"119.196.119.220"
"211.108.72.139"
"106.101.10.48"
"211.196.60.173"
"14.33.76.159"
"59.3.140.180"
"175.196.195.93"
"156.59.47.84"
"121.157.148.27"
"211.54.213.71"
"220.89.134.177"
"106.101.2.74"
"121.177.240.182"
"222.121.148.227"
"119.195.149.96"
"211.235.66.78"
"220.66.76.114"
"14.51.248.237"
"117.111.6.3"
"220.66.76.249"
"211.234.197.154"
"218.232.187.68"
"221.154.0.234"
"211.219.72.198"
"59.23.24.93"
"112.167.22.71"
"112.162.165.45"
"61.98.205.242"
"218.157.197.208"
"59.186.123.237"
"220.124.17.116"
"121.161.151.28"
"211.106.83.170"
"220.66.76.21"
"220.66.75.147"
"220.121.253.183"
"14.35.122.213"
"211.234.202.228"
"121.136.241.72"
"221.165.252.99"
"175.223.39.226"
"106.101.1.169"
"59.22.166.228"
"118.235.74.207"
"218.153.99.248"
"211.169.233.104"
"58.73.175.11"
"175.210.233.213"
"121.188.1.125"
"211.234.227.38"
"116.121.101.233"
"211.234.201.135"
"147.46.91.88"
"125.242.55.15"
"211.234.227.148"
"180.65.219.52"
"112.167.22.15"
"222.118.36.105"
"220.93.249.247"
"61.85.177.75"
"220.93.204.199"
"211.234.204.13"
"211.234.200.217"
"121.169.114.68"
"220.66.76.26"
"223.39.219.47"
"220.78.14.27"
"59.2.190.227"
"58.236.57.152"
"175.194.216.115"
"210.222.164.40"
"14.36.217.161"
"61.78.80.148"
"147.46.91.142"
"59.21.93.51"
"112.166.253.199"
"121.66.57.91"
"211.234.203.150"
"168.126.136.68"
"106.101.2.252"
"140.174.179.37"
"49.175.164.136"
"59.11.2.104"
"223.39.202.254"
"183.100.80.51"
"42.19.21.126"
"220.66.76.98"
"219.251.6.155"
"121.135.117.130"
"112.186.236.215"
"14.52.96.21"
"211.36.145.170"
"118.43.43.39"
"222.102.162.27"
"211.234.204.209"
"115.138.239.252"
"223.39.207.59"
"110.70.47.253"
"147.46.92.175"
"211.235.82.89"
"218.145.201.114"
"169.211.153.84"
"211.54.188.233"
"140.174.179.7"
"121.149.3.47"
"118.42.56.19"
"211.234.200.239"
"14.35.229.40"
"222.98.49.97"
"14.53.69.245"
"220.66.76.79"
"147.46.92.93"
"211.196.60.230"
"223.38.94.117"
"27.179.218.211"
"211.170.25.65"
"119.203.157.49"
"220.66.76.91"
"118.235.74.19"
"61.43.126.201"
"220.66.75.120"
"59.26.59.41"
"118.235.81.217"
"147.46.92.179"
"1.240.55.8"
"220.71.159.105"
"1.217.176.218"
"114.71.128.87"
"124.50.176.34"
"147.46.91.131"
"108.181.53.234"
"211.198.89.114"
"116.125.137.239"
"183.98.129.158"
"59.13.18.98"
"147.47.202.18"
"211.234.205.141"
"58.150.67.157"
"220.66.76.187"
"211.215.11.165"
"175.209.199.226"
"211.234.206.165"
"106.252.47.68"
"59.19.225.21"
"121.185.248.189"
"110.35.154.135"
"112.173.236.204"
"147.46.91.151"
"211.177.40.198"
"59.3.164.233"
"106.101.133.232"
"42.22.207.68"
"14.47.241.67"
"118.235.73.87"
"59.26.225.202"
"61.83.185.245"
"121.169.114.207"
"112.168.110.147"
"147.46.92.113"
"112.170.153.89"
"211.34.121.59"
"222.232.200.179"
"147.46.92.171"
"59.29.17.192"
"222.103.222.144"
"222.112.53.139"
"147.46.91.160"
"1.241.255.157"
"147.46.91.129"
"14.48.158.224"
"211.39.65.160"
"116.124.153.130"
"223.57.95.239"
"106.101.3.193"
"211.50.13.130"
"218.150.119.207"
"221.156.11.154"
"211.218.253.1"
"220.116.71.64"
"147.46.92.173"
"210.204.169.106"
"59.0.165.173"
"124.63.32.14"
"211.234.180.120"
"140.174.179.38"
"211.235.91.16"
"220.66.75.129"
"147.46.91.61"
"175.223.34.69"
"119.193.183.73"
"1.212.25.154"
"106.101.0.26"
"14.48.58.175"
"203.231.144.61"
"211.235.82.193"
"59.28.27.189"
"222.101.203.48"
"175.206.14.81"
"220.66.76.23"
"211.234.192.25"
"121.139.197.234"
"147.46.91.158"
"118.45.86.71"
"121.162.82.93"
"121.136.197.44"
"211.228.5.235"
"220.66.76.19"
"147.46.92.71"
"220.125.6.119"
"59.21.195.163"
"106.101.128.18"
"59.12.200.34"
"211.231.46.88"
"147.46.91.139"
"220.66.75.25"
"123.214.63.222"
"118.235.85.152"
"59.186.123.249"
"121.190.157.218"
"223.39.218.74"
"121.143.215.121"
"61.254.28.177"
"61.98.205.243"
"115.138.209.44"
"211.234.195.169"
"59.186.123.226"
"211.108.69.179"
"27.163.144.150"
"14.34.247.68"
"121.185.152.100"
"140.174.179.56"
"118.223.231.46"
"175.208.232.133"
"147.46.91.175"
"175.223.39.20"
"121.124.88.96"
"140.174.179.102"
"222.98.163.179"
"119.207.37.231"
"147.46.91.165"
"59.17.243.62"
"14.46.56.26"
"59.26.225.230"
"42.22.217.197"
"125.186.69.146"
"211.234.199.135"
"221.150.16.242"
"121.124.172.71"
"147.46.91.173"
"147.46.92.152"
"108.181.53.228"
"220.66.76.17"
"1.226.226.50"
"223.39.218.225"
"147.46.92.176"
"58.79.5.215"
"125.131.30.128"
"61.75.98.148"
"114.71.128.111"
"211.223.30.241"
"119.207.221.210"
88099 initial_data/user_program_info_init_20251208.csv (Normal file)
File diff suppressed because it is too large
@@ -2,17 +2,33 @@ package geo
 
 import (
 	"errors"
-	"net"
+	"fmt"
 	"strings"
-
-	"github.com/oschwald/geoip2-golang"
 )
 
 // ErrInvalidIP is returned when an IP cannot be parsed.
 var ErrInvalidIP = errors.New("invalid ip address")
 
-type Resolver struct {
-	db *geoip2.Reader
-}
+// ErrNotFound is returned when a backend cannot resolve the IP.
+var ErrNotFound = errors.New("location not found")
+
+type Backend string
+
+const (
+	BackendMMDB     Backend = "mmdb"
+	BackendPostgres Backend = "postgres"
+)
+
+type Config struct {
+	Backend     Backend
+	MMDBPath    string
+	DatabaseURL string
+	LookupQuery string
+}
+
+type Resolver interface {
+	Lookup(string) (Location, error)
+	Close() error
+}
 
 type Location struct {
@@ -25,56 +41,23 @@ type Location struct {
 	Longitude float64
 }
 
-func NewResolver(dbPath string) (*Resolver, error) {
-	if dbPath == "" {
-		return nil, errors.New("db path is required")
+func NewResolver(cfg Config) (Resolver, error) {
+	switch cfg.Backend {
+	case "", BackendMMDB:
+		return newMMDBResolver(cfg.MMDBPath)
+	case BackendPostgres:
+		return newPostgresResolver(cfg.DatabaseURL, cfg.LookupQuery)
+	default:
+		return nil, fmt.Errorf("unsupported backend %q", cfg.Backend)
 	}
 }
 
-	db, err := geoip2.Open(dbPath)
-	if err != nil {
-		return nil, err
-	}
-
-	return &Resolver{db: db}, nil
-}
-
-func (r *Resolver) Close() error {
-	return r.db.Close()
-}
-
-func (r *Resolver) Lookup(ipStr string) (Location, error) {
-	ip := net.ParseIP(ipStr)
-	if ip == nil {
-		return Location{}, ErrInvalidIP
-	}
-
-	record, err := r.db.City(ip)
-	if err != nil {
-		return Location{}, err
-	}
-
-	country := record.Country.Names["en"]
-	region := ""
-	if len(record.Subdivisions) > 0 {
-		region = record.Subdivisions[0].Names["en"]
-	}
-
-	city := record.City.Names["en"]
-
-	addressParts := make([]string, 0, 3)
-	for _, part := range []string{city, region, country} {
+func buildAddress(parts ...string) string {
+	addressParts := make([]string, 0, len(parts))
+	for _, part := range parts {
 		if part != "" {
 			addressParts = append(addressParts, part)
 		}
 	}
 
-	return Location{
-		IP:        ip.String(),
-		Country:   country,
-		Region:    region,
-		City:      city,
-		Address:   strings.Join(addressParts, ", "),
-		Latitude:  record.Location.Latitude,
-		Longitude: record.Location.Longitude,
-	}, nil
+	return strings.Join(addressParts, ", ")
 }
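For reference, a minimal caller sketch, not part of this diff: the hypothetical main below consumes only geo.Config, geo.NewResolver, and the Resolver interface introduced above.

package main

import (
	"fmt"
	"log"

	"geoip-rest/internal/geo"
)

func main() {
	// Select the mmdb backend; swapping to BackendPostgres only changes Config.
	resolver, err := geo.NewResolver(geo.Config{
		Backend:  geo.BackendMMDB,
		MMDBPath: "./GeoLite2-City.mmdb",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer resolver.Close()

	loc, err := resolver.Lookup("1.1.1.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %s (%.4f, %.4f)\n", loc.IP, loc.Address, loc.Latitude, loc.Longitude)
}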
59 internal/geo/resolver_mmdb.go (Normal file)
@@ -0,0 +1,59 @@
package geo

import (
	"errors"
	"net"

	"github.com/oschwald/geoip2-golang"
)

type mmdbResolver struct {
	db *geoip2.Reader
}

func newMMDBResolver(dbPath string) (Resolver, error) {
	if dbPath == "" {
		return nil, errors.New("mmdb path is required")
	}

	db, err := geoip2.Open(dbPath)
	if err != nil {
		return nil, err
	}

	return &mmdbResolver{db: db}, nil
}

func (r *mmdbResolver) Close() error {
	return r.db.Close()
}

func (r *mmdbResolver) Lookup(ipStr string) (Location, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return Location{}, ErrInvalidIP
	}

	record, err := r.db.City(ip)
	if err != nil {
		return Location{}, err
	}

	country := record.Country.Names["en"]
	region := ""
	if len(record.Subdivisions) > 0 {
		region = record.Subdivisions[0].Names["en"]
	}

	city := record.City.Names["en"]

	return Location{
		IP:        ip.String(),
		Country:   country,
		Region:    region,
		City:      city,
		Address:   buildAddress(city, region, country),
		Latitude:  record.Location.Latitude,
		Longitude: record.Location.Longitude,
	}, nil
}
98 internal/geo/resolver_postgres.go (Normal file)
@@ -0,0 +1,98 @@
package geo

import (
	"context"
	"database/sql"
	"errors"
	"net"
	"time"

	_ "github.com/jackc/pgx/v5/stdlib"
)

const defaultLookupQuery = `
SELECT
    ip::text,
    country,
    region,
    city,
    latitude,
    longitude
FROM geoip.lookup_city($1);
`

type postgresResolver struct {
	db          *sql.DB
	lookupQuery string
}

func newPostgresResolver(databaseURL, lookupQuery string) (Resolver, error) {
	if databaseURL == "" {
		return nil, errors.New("database url is required for postgres backend")
	}

	db, err := sql.Open("pgx", databaseURL)
	if err != nil {
		return nil, err
	}

	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(2)
	db.SetConnMaxIdleTime(5 * time.Minute)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := db.PingContext(ctx); err != nil {
		_ = db.Close()
		return nil, err
	}

	if lookupQuery == "" {
		lookupQuery = defaultLookupQuery
	}

	return &postgresResolver{
		db:          db,
		lookupQuery: lookupQuery,
	}, nil
}

func (r *postgresResolver) Close() error {
	return r.db.Close()
}

func (r *postgresResolver) Lookup(ipStr string) (Location, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return Location{}, ErrInvalidIP
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	row := r.db.QueryRowContext(ctx, r.lookupQuery, ip.String())

	var (
		resolvedIP          string
		country, region     sql.NullString
		city                sql.NullString
		latitude, longitude sql.NullFloat64
	)

	if err := row.Scan(&resolvedIP, &country, &region, &city, &latitude, &longitude); err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return Location{}, ErrNotFound
		}
		return Location{}, err
	}

	return Location{
		IP:        resolvedIP,
		Country:   country.String,
		Region:    region.String,
		City:      city.String,
		Address:   buildAddress(city.String, region.String, country.String),
		Latitude:  latitude.Float64,
		Longitude: longitude.Float64,
	}, nil
}
30 internal/geo/resolver_postgres_test.go (Normal file)
@@ -0,0 +1,30 @@
package geo

import (
	"os"
	"testing"
)

func TestPostgresResolverLookup(t *testing.T) {
	dsn := os.Getenv("GEOIP_TEST_DATABASE_URL")
	if dsn == "" {
		t.Skip("GEOIP_TEST_DATABASE_URL not set; skipping Postgres integration test")
	}

	resolver, err := NewResolver(Config{
		Backend:     BackendPostgres,
		DatabaseURL: dsn,
	})
	if err != nil {
		t.Fatalf("failed to init postgres resolver: %v", err)
	}
	defer resolver.Close()

	loc, err := resolver.Lookup("1.1.1.1")
	if err != nil {
		t.Fatalf("lookup failed: %v", err)
	}
	if loc.IP == "" {
		t.Fatalf("expected resolved IP, got empty")
	}
}
@@ -12,7 +12,10 @@ func TestLookupValidIP(t *testing.T) {
 		t.Skipf("mmdb not available at %s: %v", dbPath, err)
 	}
 
-	resolver, err := NewResolver(dbPath)
+	resolver, err := NewResolver(Config{
+		Backend:  BackendMMDB,
+		MMDBPath: dbPath,
+	})
 	if err != nil {
 		t.Fatalf("failed to open db: %v", err)
 	}
@@ -26,10 +29,6 @@ func TestLookupValidIP(t *testing.T) {
 	if loc.IP != "1.1.1.1" {
 		t.Errorf("unexpected IP: %s", loc.IP)
 	}
-	// Ensure coordinates are populated for sanity.
-	if loc.Latitude == 0 && loc.Longitude == 0 {
-		t.Errorf("expected non-zero coordinates, got lat=%f lon=%f", loc.Latitude, loc.Longitude)
-	}
 }
 
 func TestLookupInvalidIP(t *testing.T) {
@@ -38,7 +37,10 @@ func TestLookupInvalidIP(t *testing.T) {
 		t.Skipf("mmdb not available at %s: %v", dbPath, err)
 	}
 
-	resolver, err := NewResolver(dbPath)
+	resolver, err := NewResolver(Config{
+		Backend:  BackendMMDB,
+		MMDBPath: dbPath,
+	})
 	if err != nil {
 		t.Fatalf("failed to open db: %v", err)
 	}
35 internal/importer/helpers.go (Normal file)
@@ -0,0 +1,35 @@
package importer

import (
	"context"
	"database/sql"
	"fmt"

	"github.com/jackc/pgx/v5"
)

// LatestID returns the maximum id in the replica table.
func LatestID(ctx context.Context, conn *pgx.Conn, schema, table string) (int64, error) {
	var id sql.NullInt64
	query := fmt.Sprintf("SELECT MAX(id) FROM %s", pgx.Identifier{schema, table}.Sanitize())
	if err := conn.QueryRow(ctx, query).Scan(&id); err != nil {
		return 0, err
	}
	if !id.Valid {
		return 0, nil
	}
	return id.Int64, nil
}

// CountUpToID returns the number of rows with id <= maxID.
func CountUpToID(ctx context.Context, conn *pgx.Conn, schema, table string, maxID int64) (int64, error) {
	var count sql.NullInt64
	query := fmt.Sprintf("SELECT COUNT(*) FROM %s WHERE id <= $1", pgx.Identifier{schema, table}.Sanitize())
	if err := conn.QueryRow(ctx, query, maxID).Scan(&count); err != nil {
		return 0, err
	}
	if !count.Valid {
		return 0, nil
	}
	return count.Int64, nil
}
493 internal/importer/user_program_info.go (Normal file)
@@ -0,0 +1,493 @@
package importer

import (
	"context"
	"database/sql"
	"encoding/csv"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"regexp"
	"slices"
	"strconv"
	"strings"
	"time"

	"github.com/jackc/pgx/v5"
)

const (
	defaultSchema = "public"
	ReplicaTable  = "user_program_info_replica"
)

var (
	kstLocation = func() *time.Location {
		loc, err := time.LoadLocation("Asia/Seoul")
		if err != nil {
			return time.FixedZone("KST", 9*60*60)
		}
		return loc
	}()
	userProgramColumns = []string{
		"id",
		"product_name",
		"login_id",
		"user_employee_id",
		"login_version",
		"login_public_ip",
		"login_local_ip",
		"user_company",
		"user_department",
		"user_position",
		"user_login_time",
		"created_at",
		"user_family_flag",
	}
	timeLayouts = []string{
		"2006-01-02 15:04:05.000",
		"2006-01-02 15:04:05",
		time.RFC3339,
		"2006-01-02T15:04:05.000Z07:00",
	}
)

// EnsureUserProgramReplica ensures the target table exists, then imports one or more CSVs.
// csvPath can point to a single file or a directory (all *.csv will be processed in name order).
// Logs are written to logDir for every processed file.
func EnsureUserProgramReplica(ctx context.Context, conn *pgx.Conn, csvPath, schema, logDir string) error {
	if schema == "" {
		schema = defaultSchema
	}
	if logDir == "" {
		logDir = "log"
	}

	if err := ensureSchema(ctx, conn, schema); err != nil {
		return err
	}

	if err := createReplicaTable(ctx, conn, schema, ReplicaTable); err != nil {
		return err
	}

	files, err := resolveCSVTargets(csvPath)
	if err != nil {
		return err
	}
	if len(files) == 0 {
		return fmt.Errorf("no csv files found at %s", csvPath)
	}

	for _, file := range files {
		if err := importSingle(ctx, conn, file, schema, logDir); err != nil {
			return err
		}
	}
	return nil
}

// ImportUserProgramUpdates imports all CSV files under updateDir (non-recursive) into an existing replica table.
// Each file is processed independently; failure stops the sequence and logs the error.
func ImportUserProgramUpdates(ctx context.Context, conn *pgx.Conn, updateDir, schema, logDir string) error {
	if updateDir == "" {
		return nil
	}
	files, err := resolveCSVTargets(updateDir)
	if err != nil {
		return err
	}
	if len(files) == 0 {
		return nil
	}

	for _, file := range files {
		if err := importSingle(ctx, conn, file, schema, logDir); err != nil {
			return err
		}
	}
	return nil
}

func tableExists(ctx context.Context, conn *pgx.Conn, schema, table string) (bool, error) {
	const q = `
SELECT EXISTS (
    SELECT 1
    FROM information_schema.tables
    WHERE table_schema = $1 AND table_name = $2
);`

	var exists bool
	if err := conn.QueryRow(ctx, q, schema, table).Scan(&exists); err != nil {
		return false, err
	}
	return exists, nil
}

func createReplicaTable(ctx context.Context, conn *pgx.Conn, schema, table string) error {
	identifier := pgx.Identifier{schema, table}.Sanitize()
	ddl := fmt.Sprintf(`
CREATE TABLE IF NOT EXISTS %s (
    id bigint PRIMARY KEY,
    product_name text,
    login_id text,
    user_employee_id text,
    login_version text,
    login_public_ip text,
    login_local_ip text,
    user_company text,
    user_department text,
    user_position text,
    user_login_time timestamp,
    created_at timestamp,
    user_family_flag boolean
);`, identifier)

	_, err := conn.Exec(ctx, ddl)
	return err
}

func ensureSchema(ctx context.Context, conn *pgx.Conn, schema string) error {
	if schema == "" {
		return nil
	}
	_, err := conn.Exec(ctx, fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s`, pgx.Identifier{schema}.Sanitize()))
	return err
}

type importResult struct {
	rowsCopied   int64
	rowsUpserted int64
	finishedAt   time.Time
}

func copyAndUpsertCSV(ctx context.Context, conn *pgx.Conn, path, schema, table string) (importResult, error) {
	res := importResult{}

	f, err := os.Open(path)
	if err != nil {
		return res, err
	}
	defer f.Close()

	reader := csv.NewReader(f)
	reader.FieldsPerRecord = -1

	header, err := reader.Read()
	if err != nil {
		return res, err
	}
	if len(header) != len(userProgramColumns) {
		return res, fmt.Errorf("unexpected column count in CSV: got %d, want %d", len(header), len(userProgramColumns))
	}

	tx, err := conn.Begin(ctx)
	if err != nil {
		return res, err
	}
	defer func() {
		_ = tx.Rollback(ctx)
	}()

	tempTable := fmt.Sprintf("%s_import_tmp_%d", table, time.Now().UnixNano())

	if _, err := tx.Exec(ctx, fmt.Sprintf(`CREATE TEMP TABLE %s (LIKE %s INCLUDING ALL) ON COMMIT DROP;`, quoteIdent(tempTable), pgx.Identifier{schema, table}.Sanitize())); err != nil {
		return res, err
	}

	source := &csvSource{
		reader: reader,
	}

	copied, err := tx.CopyFrom(ctx, pgx.Identifier{tempTable}, userProgramColumns, source)
	if err != nil {
		return res, err
	}
	if copied == 0 {
		return res, errors.New("no rows were copied from CSV")
	}

	quotedColumns := quoteColumns(userProgramColumns)
	upsertSQL := fmt.Sprintf(`
INSERT INTO %s (%s)
SELECT %s FROM %s
ON CONFLICT (id) DO UPDATE SET
    product_name = EXCLUDED.product_name,
    login_id = EXCLUDED.login_id,
    user_employee_id = EXCLUDED.user_employee_id,
    login_version = EXCLUDED.login_version,
    login_public_ip = EXCLUDED.login_public_ip,
    login_local_ip = EXCLUDED.login_local_ip,
    user_company = EXCLUDED.user_company,
    user_department = EXCLUDED.user_department,
    user_position = EXCLUDED.user_position,
    user_login_time = EXCLUDED.user_login_time,
    created_at = EXCLUDED.created_at,
    user_family_flag = EXCLUDED.user_family_flag;
`, pgx.Identifier{schema, table}.Sanitize(), strings.Join(quotedColumns, ", "), strings.Join(quotedColumns, ", "), quoteIdent(tempTable))

	upsertRes, err := tx.Exec(ctx, upsertSQL)
	if err != nil {
		return res, err
	}

	if err := tx.Commit(ctx); err != nil {
		return res, err
	}

	res.rowsCopied = copied
	res.rowsUpserted = upsertRes.RowsAffected()
	res.finishedAt = time.Now()
	return res, nil
}

type csvSource struct {
	reader *csv.Reader
	record []string
	err    error
}

func (s *csvSource) Next() bool {
	if s.err != nil {
		return false
	}

	rec, err := s.reader.Read()
	if err != nil {
		if errors.Is(err, io.EOF) {
			return false
		}
		s.err = err
		return false
	}

	s.record = rec
	return true
}

func (s *csvSource) Values() ([]any, error) {
	if len(s.record) != len(userProgramColumns) {
		return nil, fmt.Errorf("unexpected record length: got %d, want %d", len(s.record), len(userProgramColumns))
	}

	id, err := strconv.ParseInt(s.record[0], 10, 64)
	if err != nil {
		return nil, fmt.Errorf("parse id: %w", err)
	}

	loginTime, err := parseTimestamp(s.record[10])
	if err != nil {
		return nil, fmt.Errorf("parse user_login_time: %w", err)
	}

	createdAt, err := parseTimestamp(s.record[11])
	if err != nil {
		return nil, fmt.Errorf("parse created_at: %w", err)
	}

	var familyFlag any
	if v := s.record[12]; v == "" {
		familyFlag = nil
	} else {
		switch v {
		case "1", "true", "TRUE", "t", "T":
			familyFlag = true
		case "0", "false", "FALSE", "f", "F":
			familyFlag = false
		default:
			parsed, err := strconv.ParseBool(v)
			if err != nil {
				return nil, fmt.Errorf("parse user_family_flag: %w", err)
			}
			familyFlag = parsed
		}
	}

	return []any{
		id,
		nullOrString(s.record[1]),
		nullOrString(s.record[2]),
		nullOrString(s.record[3]),
		nullOrString(s.record[4]),
		nullOrString(s.record[5]),
		nullOrString(s.record[6]),
		nullOrString(s.record[7]),
		nullOrString(s.record[8]),
		nullOrString(s.record[9]),
		loginTime,
		createdAt,
		familyFlag,
	}, nil
}

func (s *csvSource) Err() error {
	return s.err
}

func parseTimestamp(raw string) (any, error) {
	if raw == "" {
		return nil, nil
	}
	for _, layout := range timeLayouts {
		if t, err := time.ParseInLocation(layout, raw, kstLocation); err == nil {
			return t, nil
		}
	}
	return nil, fmt.Errorf("unsupported timestamp format: %s", raw)
}

func nullOrString(val string) any {
	if val == "" {
		return nil
	}
	return val
}

func importSingle(ctx context.Context, conn *pgx.Conn, csvPath, schema, logDir string) error {
	startedAt := time.Now()

	res, err := copyAndUpsertCSV(ctx, conn, csvPath, schema, ReplicaTable)
	logStatus := "succeeded"
	logErrMsg := ""
	if err != nil {
		logStatus = "failed"
		logErrMsg = err.Error()
	}

	_ = writeImportLog(logDir, importLog{
		StartedAt:    startedAt,
		FinishedAt:   res.finishedAt,
		CSVPath:      csvPath,
		Status:       logStatus,
		RowsCopied:   res.rowsCopied,
		RowsUpserted: res.rowsUpserted,
		Error:        logErrMsg,
	})

	return err
}

func resolveCSVTargets(path string) ([]string, error) {
	info, err := os.Stat(path)
	if err != nil {
		return nil, err
	}
	if info.IsDir() {
		entries, err := os.ReadDir(path)
		if err != nil {
			return nil, err
		}
		var files []string
		for _, e := range entries {
			if e.IsDir() {
				continue
			}
			if strings.HasSuffix(strings.ToLower(e.Name()), ".csv") {
				files = append(files, filepath.Join(path, e.Name()))
			}
		}
		slices.Sort(files)
		return files, nil
	}
	return []string{path}, nil
}

type importLog struct {
	StartedAt    time.Time
	FinishedAt   time.Time
	CSVPath      string
	Status       string
	RowsCopied   int64
	RowsUpserted int64
	Error        string
	LatestDate   time.Time
}

func writeImportLog(logDir string, entry importLog) error {
	if err := os.MkdirAll(logDir, 0o755); err != nil {
		return err
	}

	now := time.Now().In(kstLocation)
	if entry.StartedAt.IsZero() {
		entry.StartedAt = now
	}
	filename := fmt.Sprintf("user_program_import_%s.log", now.Format("20060102_150405"))
	path := filepath.Join(logDir, filename)

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	start := entry.StartedAt.In(kstLocation).Format(time.RFC3339)
	finish := ""
	if !entry.FinishedAt.IsZero() {
		finish = entry.FinishedAt.In(kstLocation).Format(time.RFC3339)
	}

	lines := []string{
		fmt.Sprintf("status=%s", entry.Status),
		fmt.Sprintf("csv_path=%s", entry.CSVPath),
		fmt.Sprintf("started_at=%s", start),
		fmt.Sprintf("finished_at=%s", finish),
		fmt.Sprintf("rows_copied=%d", entry.RowsCopied),
		fmt.Sprintf("rows_upserted=%d", entry.RowsUpserted),
	}
	if entry.Error != "" {
		lines = append(lines, fmt.Sprintf("error=%s", entry.Error))
	}
	if !entry.LatestDate.IsZero() {
		lines = append(lines, fmt.Sprintf("latest_date=%s", entry.LatestDate.In(kstLocation).Format("2006-01-02")))
	}

	for _, line := range lines {
		if _, err := f.WriteString(line + "\n"); err != nil {
			return err
		}
	}

	return nil
}

func quoteIdent(s string) string {
	return `"` + strings.ReplaceAll(s, `"`, `""`) + `"`
}

func quoteColumns(cols []string) []string {
	out := make([]string, len(cols))
	for i, c := range cols {
		out[i] = quoteIdent(c)
	}
	return out
}

func LatestCreatedDate(ctx context.Context, conn *pgx.Conn, schema, table string) (time.Time, error) {
	var ts sql.NullTime
	query := fmt.Sprintf("SELECT MAX(created_at) FROM %s", pgx.Identifier{schema, table}.Sanitize())
	if err := conn.QueryRow(ctx, query).Scan(&ts); err != nil {
		return time.Time{}, err
	}
	if !ts.Valid {
		return time.Time{}, nil
	}
	return truncateToKSTDate(ts.Time), nil
}

func truncateToKSTDate(t time.Time) time.Time {
	kst := t.In(kstLocation)
	return time.Date(kst.Year(), kst.Month(), kst.Day(), 0, 0, 0, 0, kstLocation)
}

func dateFromFilename(path string) (time.Time, error) {
	base := filepath.Base(path)
	re := regexp.MustCompile(`(\d{8})`)
	match := re.FindStringSubmatch(base)
	if len(match) < 2 {
		return time.Time{}, fmt.Errorf("no date in filename: %s", base)
	}
	return time.ParseInLocation("20060102", match[1], kstLocation)
}
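The cmd/user_program_import entrypoint is built in the Dockerfile but not shown in this diff; a minimal sketch of how it could drive EnsureUserProgramReplica, assuming DATABASE_URL carries the Postgres DSN (that variable name is an assumption here).

package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"

	"geoip-rest/internal/importer"
)

func main() {
	ctx := context.Background()

	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("connect postgres: %v", err)
	}
	defer conn.Close(ctx)

	// Create the replica table if needed, then import the initial CSV
	// (or every *.csv under a directory) with per-file logs under /log.
	if err := importer.EnsureUserProgramReplica(ctx, conn,
		"/initial_data/user_program_info_init_20251208.csv", "public", "/log"); err != nil {
		log.Fatalf("import failed: %v", err)
	}
}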
92 internal/schedule/scheduler.go (Normal file)
@@ -0,0 +1,92 @@
package schedule

import (
	"context"
	"errors"
	"log"
	"os"
	"os/exec"
	"time"

	"github.com/robfig/cron/v3"
)

type Config struct {
	CronExpr string
	Command  string
	Args     []string
	Logger   *log.Logger
}

type Scheduler struct {
	cron   *cron.Cron
	logger *log.Logger
}

// Start configures and starts the cron scheduler. It runs the given script at the
// specified cron expression (KST). The caller owns the returned scheduler and must
// call Stop on shutdown.
func Start(cfg Config) (*Scheduler, error) {
	if cfg.CronExpr == "" {
		return nil, errors.New("CronExpr is required")
	}
	if cfg.Command == "" {
		return nil, errors.New("Command is required")
	}

	if cfg.Logger == nil {
		cfg.Logger = log.Default()
	}

	kst, err := time.LoadLocation("Asia/Seoul")
	if err != nil {
		kst = time.FixedZone("KST", 9*60*60)
	}

	parser := cron.NewParser(cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.Dow)
	spec, err := parser.Parse(cfg.CronExpr)
	if err != nil {
		return nil, err
	}

	c := cron.New(cron.WithLocation(kst), cron.WithParser(parser))
	c.Schedule(spec, cron.FuncJob(func() {
		runCommand(cfg.Logger, cfg.Command, cfg.Args...)
	}))

	c.Start()

	cfg.Logger.Printf("scheduler started with cron=%s command=%s args=%v tz=%s", cfg.CronExpr, cfg.Command, cfg.Args, kst)

	return &Scheduler{
		cron:   c,
		logger: cfg.Logger,
	}, nil
}

// Stop halts the scheduler. It does not cancel a currently running job.
func (s *Scheduler) Stop() context.Context {
	if s == nil || s.cron == nil {
		return context.Background()
	}
	return s.cron.Stop()
}

func runCommand(logger *log.Logger, command string, args ...string) {
	start := time.Now()
	logger.Printf("scheduler: running %s %v", command, args)

	cmd := exec.Command(command, args...)
	cmd.Env = os.Environ()
	out, err := cmd.CombinedOutput()
	duration := time.Since(start)

	if len(out) > 0 {
		logger.Printf("scheduler: output:\n%s", string(out))
	}
	if err != nil {
		logger.Printf("scheduler: %s failed after %s: %v", command, duration, err)
		return
	}
	logger.Printf("scheduler: %s completed in %s", command, duration)
}
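A minimal usage sketch (hypothetical caller); the fixed daily expression mirrors the 00:05 KST schedule noted in to-do.md below.

package main

import (
	"log"

	"geoip-rest/internal/schedule"
)

func main() {
	// Run user-program-sync every day at 00:05 KST.
	s, err := schedule.Start(schedule.Config{
		CronExpr: "5 0 * * *",
		Command:  "user-program-sync",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer s.Stop()

	select {} // block; in the real server, Fiber's Listen plays this role
}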
186 internal/userprogram/config.go (Normal file)
@@ -0,0 +1,186 @@
package userprogram

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strconv"
	"strings"
	"time"

	"geoip-rest/internal/geo"
)

type Backend string

const (
	BackendMMDB     Backend = "mmdb"
	BackendPostgres Backend = "postgres"
)

const (
	DefaultUpdateDir   = "/update_data"
	DefaultLogDir      = "/log"
	DefaultSchema      = "public"
	DefaultInitialCSV  = "/initial_data/user_program_info_init_20251208.csv"
	DefaultTable       = "user_program_info"
	DefaultDatabase    = "user_program_info"
	defaultTargetRange = "20060102"
)

type MySQLConfig struct {
	Host     string
	Port     int
	User     string
	Password string
	Database string
	Table    string
}

type Paths struct {
	UpdateDir  string
	LogDir     string
	InitialCSV string
	Schema     string
}

func NewMySQLConfigFromEnv() (MySQLConfig, error) {
	port, err := strconv.Atoi(env("USER_PROGRAM_INFO_PORT", "3306"))
	if err != nil {
		return MySQLConfig{}, fmt.Errorf("invalid USER_PROGRAM_INFO_PORT: %w", err)
	}

	host, err := envRequiredValue("USER_PROGRAM_INFO_HOST")
	if err != nil {
		return MySQLConfig{}, err
	}
	user, err := envRequiredValue("USER_PROGRAM_INFO_USERNAME")
	if err != nil {
		return MySQLConfig{}, err
	}
	password, err := envRequiredValue("USER_PROGRAM_INFO_PASSWORD")
	if err != nil {
		return MySQLConfig{}, err
	}

	cfg := MySQLConfig{
		Host:     host,
		Port:     port,
		User:     user,
		Password: password,
		Database: env("USER_PROGRAM_INFO_DB", DefaultDatabase),
		Table:    env("USER_PROGRAM_INFO_TABLE", DefaultTable),
	}
	if cfg.Host == "" || cfg.User == "" || cfg.Password == "" {
		return MySQLConfig{}, fmt.Errorf("mysql connection envs are required")
	}
	return cfg, nil
}

func NewPathsFromEnv() (Paths, error) {
	schema := env("USER_PROGRAM_INFO_SCHEMA", env("POSTGRES_SCHEMA", DefaultSchema))
	paths := Paths{
		UpdateDir:  env("USER_PROGRAM_UPDATE_DIR", DefaultUpdateDir),
		LogDir:     env("USER_PROGRAM_IMPORT_LOG_DIR", DefaultLogDir),
		InitialCSV: env("USER_PROGRAM_INFO_CSV", DefaultInitialCSV),
		Schema:     schema,
	}

	for _, dir := range []string{paths.UpdateDir, paths.LogDir} {
		if dir == "" {
			continue
		}
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return Paths{}, fmt.Errorf("create dir %s: %w", dir, err)
		}
	}
	return paths, nil
}

func BackendFromEnv() Backend {
	val := strings.ToLower(env("GEOIP_BACKEND", string(BackendMMDB)))
	switch val {
	case string(BackendMMDB), "":
		return BackendMMDB
	case string(BackendPostgres):
		return BackendPostgres
	default:
		return BackendMMDB
	}
}

func ResolveBackend(cfg geo.Config) (geo.Resolver, error) {
	switch Backend(cfg.Backend) {
	case BackendMMDB, "":
		if cfg.MMDBPath == "" {
			return nil, errors.New("MMDBPath required for mmdb backend")
		}
		return geo.NewResolver(geo.Config{
			Backend:  geo.BackendMMDB,
			MMDBPath: cfg.MMDBPath,
		})
	case BackendPostgres:
		if cfg.DatabaseURL == "" {
			return nil, errors.New("DatabaseURL required for postgres backend")
		}
		return geo.NewResolver(geo.Config{
			Backend:     geo.BackendPostgres,
			DatabaseURL: cfg.DatabaseURL,
			LookupQuery: cfg.LookupQuery,
		})
	default:
		return nil, fmt.Errorf("unsupported backend %s", cfg.Backend)
	}
}

func ParseTargetDate(raw string) (time.Time, error) {
	if raw == "" {
		return yesterdayKST(), nil
	}
	t, err := time.ParseInLocation("2006-01-02", raw, kst())
	if err != nil {
		return time.Time{}, fmt.Errorf("invalid date %q (expected YYYY-MM-DD)", raw)
	}
	return t, nil
}

func DateFromFilename(path string) (time.Time, error) {
	base := filepath.Base(path)
	re := regexp.MustCompile(`(\d{8})`)
	match := re.FindStringSubmatch(base)
	if len(match) < 2 {
		return time.Time{}, fmt.Errorf("no date in filename: %s", base)
	}
	return time.ParseInLocation(defaultTargetRange, match[1], kst())
}

func yesterdayKST() time.Time {
	now := time.Now().In(kst())
	yesterday := now.AddDate(0, 0, -1)
	return time.Date(yesterday.Year(), yesterday.Month(), yesterday.Day(), 0, 0, 0, 0, kst())
}

func kst() *time.Location {
	loc, err := time.LoadLocation("Asia/Seoul")
	if err != nil {
		return time.FixedZone("KST", 9*60*60)
	}
	return loc
}

func env(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func envRequiredValue(key string) (string, error) {
	v := os.Getenv(key)
	if v == "" {
		return "", fmt.Errorf("%s is required", key)
	}
	return v, nil
}
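A hypothetical illustration (not in this diff) of the date helpers above: an empty target falls back to yesterday at midnight KST, and both explicit forms resolve to midnight KST.

package main

import (
	"fmt"
	"log"

	"geoip-rest/internal/userprogram"
)

func main() {
	t, err := userprogram.ParseTargetDate("2025-12-09")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(t) // 2025-12-09 00:00:00 +0900 KST

	d, err := userprogram.DateFromFilename("user_program_info_20251208.csv")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(d) // 2025-12-08 00:00:00 +0900 KST
}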
263 internal/userprogram/dumper.go (Normal file)
@@ -0,0 +1,263 @@
package userprogram

import (
	"context"
	"database/sql"
	"encoding/csv"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"time"

	"github.com/go-sql-driver/mysql"
)

type Dumper struct {
	cfg       MySQLConfig
	updateDir string
	db        *sql.DB
}

func NewDumper(cfg MySQLConfig, updateDir string) (*Dumper, error) {
	if updateDir == "" {
		updateDir = DefaultUpdateDir
	}
	if err := os.MkdirAll(updateDir, 0o755); err != nil {
		return nil, err
	}

	dsn := (&mysql.Config{
		User:                 cfg.User,
		Passwd:               cfg.Password,
		Net:                  "tcp",
		Addr:                 netAddr(cfg.Host, cfg.Port),
		DBName:               cfg.Database,
		Params:               map[string]string{"parseTime": "true", "loc": "UTC", "charset": "utf8mb4"},
		AllowNativePasswords: true,
	}).FormatDSN()

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, fmt.Errorf("open mysql: %w", err)
	}
	db.SetMaxOpenConns(5)
	db.SetMaxIdleConns(2)
	db.SetConnMaxIdleTime(5 * time.Minute)

	if _, err := db.Exec("SET time_zone = '+00:00'"); err != nil {
		_ = db.Close()
		return nil, fmt.Errorf("set timezone: %w", err)
	}

	return &Dumper{
		cfg:       cfg,
		updateDir: updateDir,
		db:        db,
	}, nil
}

func (d *Dumper) Close() error {
	if d.db == nil {
		return nil
	}
	return d.db.Close()
}

// MaxIDUntil returns the maximum id with created_at up to and including cutoff (KST).
func (d *Dumper) MaxIDUntil(ctx context.Context, cutoff time.Time) (int64, error) {
	query := fmt.Sprintf(`SELECT COALESCE(MAX(id), 0) FROM %s WHERE DATE(CONVERT_TZ(created_at, '+00:00', '+09:00')) <= ?`, d.cfg.Table)
	var maxID sql.NullInt64
	if err := d.db.QueryRowContext(ctx, query, cutoff.In(kst()).Format("2006-01-02")).Scan(&maxID); err != nil {
		return 0, err
	}
	if !maxID.Valid {
		return 0, nil
	}
	return maxID.Int64, nil
}

// CountUpToID returns count(*) where id <= maxID in source.
func (d *Dumper) CountUpToID(ctx context.Context, maxID int64) (int64, error) {
	query := fmt.Sprintf(`SELECT COUNT(*) FROM %s WHERE id <= ?`, d.cfg.Table)
	var count sql.NullInt64
	if err := d.db.QueryRowContext(ctx, query, maxID).Scan(&count); err != nil {
		return 0, err
	}
	if !count.Valid {
		return 0, nil
	}
	return count.Int64, nil
}

// DumpRange exports rows with id in (startID, endID] to a CSV file.
func (d *Dumper) DumpRange(ctx context.Context, startID, endID int64, label time.Time) (string, error) {
	if endID <= startID {
		return "", nil
	}

	query := fmt.Sprintf(`
SELECT
    id,
    product_name,
    login_id,
    user_employee_id,
    login_version,
    login_public_ip,
    login_local_ip,
    user_company,
    user_department,
    user_position,
    user_login_time,
    created_at,
    user_family_flag
FROM %s
WHERE id > ? AND id <= ?
ORDER BY id;`, d.cfg.Table)

	rows, err := d.db.QueryContext(ctx, query, startID, endID)
	if err != nil {
		return "", fmt.Errorf("query: %w", err)
	}
	defer rows.Close()

	filename := fmt.Sprintf("user_program_info_%s.csv", label.In(kst()).Format(defaultTargetRange))
	outPath := filepath.Join(d.updateDir, filename)
	tmpPath := outPath + ".tmp"

	f, err := os.Create(tmpPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	writer := csv.NewWriter(f)
	defer writer.Flush()

	header := []string{
		"id",
		"product_name",
		"login_id",
		"user_employee_id",
		"login_version",
		"login_public_ip",
		"login_local_ip",
		"user_company",
		"user_department",
		"user_position",
		"user_login_time",
		"created_at",
		"user_family_flag",
	}
	if err := writer.Write(header); err != nil {
		return "", err
	}

	for rows.Next() {
		record, err := scanRow(rows)
		if err != nil {
			return "", err
		}
		if err := writer.Write(record); err != nil {
			return "", err
		}
	}
	if err := rows.Err(); err != nil {
		return "", err
	}
	writer.Flush()
	if err := writer.Error(); err != nil {
		return "", err
	}

	if err := os.Rename(tmpPath, outPath); err != nil {
		return "", err
	}
	return outPath, nil
}

func scanRow(rows *sql.Rows) ([]string, error) {
	var (
		id             sql.NullInt64
		productName    sql.NullString
		loginID        sql.NullString
		employeeID     sql.NullString
		loginVersion   sql.NullString
		loginPublicIP  sql.NullString
		loginLocalIP   sql.NullString
		userCompany    sql.NullString
		userDepartment sql.NullString
		userPosition   sql.NullString
		userLoginTime  sql.NullString
		createdAt      sql.NullString
		userFamilyFlag sql.NullString
	)

	if err := rows.Scan(
		&id,
		&productName,
		&loginID,
		&employeeID,
		&loginVersion,
		&loginPublicIP,
		&loginLocalIP,
		&userCompany,
		&userDepartment,
		&userPosition,
		&userLoginTime,
		&createdAt,
		&userFamilyFlag,
	); err != nil {
		return nil, err
	}
	if !id.Valid {
		return nil, fmt.Errorf("row missing id")
	}

	return []string{
		strconv.FormatInt(id.Int64, 10),
		nullToString(productName),
		nullToString(loginID),
		nullToString(employeeID),
		nullToString(loginVersion),
		nullToString(loginPublicIP),
		nullToString(loginLocalIP),
		nullToString(userCompany),
		nullToString(userDepartment),
		nullToString(userPosition),
		formatTimestamp(userLoginTime.String),
		formatTimestamp(createdAt.String),
		nullToString(userFamilyFlag),
	}, nil
}

func nullToString(v sql.NullString) string {
	if v.Valid {
		return v.String
	}
	return ""
}

func netAddr(host string, port int) string {
	return fmt.Sprintf("%s:%d", host, port)
}

func formatTimestamp(raw string) string {
	if raw == "" {
		return ""
	}
	for _, layout := range []string{
		"2006-01-02 15:04:05.000",
		"2006-01-02 15:04:05",
		time.RFC3339,
		"2006-01-02T15:04:05.000Z07:00",
	} {
		if t, err := time.Parse(layout, raw); err == nil {
			return t.In(kst()).Format("2006-01-02 15:04:05.000")
		}
		if t, err := time.ParseInLocation(layout, raw, kst()); err == nil {
			return t.In(kst()).Format("2006-01-02 15:04:05.000")
		}
	}
	return raw
}
221 internal/userprogram/ip_geoinfo.go (Normal file)
@@ -0,0 +1,221 @@
package userprogram

import (
	"context"
	"fmt"
	"net"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"

	"github.com/jackc/pgx/v5"

	"geoip-rest/internal/geo"
)

func EnsureIPGeoInfoTable(ctx context.Context, conn *pgx.Conn, schema string) error {
	ddl := fmt.Sprintf(`
CREATE TABLE IF NOT EXISTS %s.ip_geoinfo (
    id bigserial PRIMARY KEY,
    ip inet UNIQUE NOT NULL,
    country text,
    region text,
    city text,
    address text,
    latitude double precision,
    longitude double precision
);`, pgx.Identifier{schema}.Sanitize())
	_, err := conn.Exec(ctx, ddl)
	return err
}

const defaultSeedPath = "/initial_data/ip_geoinfo_seed_20251208.sql"

// SeedIPGeoInfoIfMissing applies the seed SQL when ip_geoinfo is absent.
func SeedIPGeoInfoIfMissing(ctx context.Context, conn *pgx.Conn, schema string) error {
	exists, err := ipGeoInfoExists(ctx, conn, schema)
	if err != nil {
		return err
	}
	if exists {
		return nil
	}
	if _, err := os.Stat(defaultSeedPath); err == nil {
		if err := ExecuteSQLFile(ctx, conn, defaultSeedPath); err != nil {
			return fmt.Errorf("execute seed sql: %w", err)
		}
	}
	return EnsureIPGeoInfoTable(ctx, conn, schema)
}

func ipGeoInfoExists(ctx context.Context, conn *pgx.Conn, schema string) (bool, error) {
	var exists bool
	err := conn.QueryRow(ctx, `
SELECT EXISTS (
    SELECT 1 FROM information_schema.tables
    WHERE table_schema = $1 AND table_name = 'ip_geoinfo'
);`, schema).Scan(&exists)
	return exists, err
}

// ExportPublicIPs writes distinct login_public_ip values to a CSV file with header.
func ExportPublicIPs(ctx context.Context, conn *pgx.Conn, schema, path string) error {
	rows, err := conn.Query(ctx, fmt.Sprintf(`
SELECT DISTINCT login_public_ip
FROM %s.user_program_info_replica
WHERE login_public_ip IS NOT NULL AND login_public_ip <> ''
ORDER BY login_public_ip;`, pgx.Identifier{schema}.Sanitize()))
	if err != nil {
		return err
	}
	defer rows.Close()

	var ips []string
	for rows.Next() {
		var ip string
		if err := rows.Scan(&ip); err != nil {
			return err
		}
		ips = append(ips, ip)
	}
	if rows.Err() != nil {
		return rows.Err()
	}

	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := f.WriteString(`"login_public_ip"` + "\n"); err != nil {
		return err
	}
	for _, ip := range ips {
		if _, err := f.WriteString(fmt.Sprintf(`"%s"`+"\n", ip)); err != nil {
			return err
		}
	}
	return nil
}

// GenerateIPGeoInfoSQL builds an upsert SQL file for IPs. If onlyNew is true, it skips
// IPs already present in ip_geoinfo.
func GenerateIPGeoInfoSQL(ctx context.Context, conn *pgx.Conn, schema string, resolver geo.Resolver, output string, onlyNew bool) (int, error) {
	query := fmt.Sprintf(`
SELECT DISTINCT login_public_ip
FROM %s.user_program_info_replica r
WHERE login_public_ip IS NOT NULL AND login_public_ip <> ''`, pgx.Identifier{schema}.Sanitize())
	if onlyNew {
		query += fmt.Sprintf(`
AND NOT EXISTS (
    SELECT 1 FROM %s.ip_geoinfo g WHERE g.ip = r.login_public_ip::inet
)`, pgx.Identifier{schema}.Sanitize())
	}
	query += ";"

	rows, err := conn.Query(ctx, query)
	if err != nil {
		return 0, err
	}
	defer rows.Close()

	var ips []string
	for rows.Next() {
		var ip string
		if err := rows.Scan(&ip); err != nil {
			return 0, err
		}
		ips = append(ips, ip)
	}
	if rows.Err() != nil {
		return 0, rows.Err()
	}
	if len(ips) == 0 {
		return 0, nil
	}

	sort.Strings(ips)

	if err := os.MkdirAll(filepath.Dir(output), 0o755); err != nil {
		return 0, err
	}
	f, err := os.Create(output)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	header := fmt.Sprintf("-- generated at %s KST\n", time.Now().In(kst()).Format(time.RFC3339))
	header += fmt.Sprintf("CREATE SCHEMA IF NOT EXISTS %s;\n", schemaIdent(schema))
	header += fmt.Sprintf(`CREATE TABLE IF NOT EXISTS %s.ip_geoinfo (
    id bigserial PRIMARY KEY,
    ip inet UNIQUE NOT NULL,
    country text,
    region text,
    city text,
    address text,
    latitude double precision,
    longitude double precision
);`+"\n", schemaIdent(schema))
	if _, err := f.WriteString(header); err != nil {
		return 0, err
	}

	count := 0
	for _, ip := range ips {
		loc, err := resolver.Lookup(ip)
		if err != nil {
			continue
		}
		stmt := fmt.Sprintf(`INSERT INTO %s.ip_geoinfo (ip, country, region, city, address, latitude, longitude)
VALUES ('%s', %s, %s, %s, %s, %f, %f)
ON CONFLICT (ip) DO UPDATE SET
    country = EXCLUDED.country,
    region = EXCLUDED.region,
    city = EXCLUDED.city,
    address = EXCLUDED.address,
    latitude = EXCLUDED.latitude,
    longitude = EXCLUDED.longitude;
`, schemaIdent(schema), toHostCIDR(ip), quote(loc.Country), quote(loc.Region), quote(loc.City), quote(loc.Address), loc.Latitude, loc.Longitude)
		if _, err := f.WriteString(stmt); err != nil {
			return count, err
		}
		count++
	}

	return count, nil
}

func ExecuteSQLFile(ctx context.Context, conn *pgx.Conn, path string) error {
	content, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	_, err = conn.Exec(ctx, string(content))
	return err
}

func toHostCIDR(ipStr string) string {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return ""
	}
	if ip.To4() != nil {
		return ip.String() + "/32"
	}
	return ip.String() + "/128"
}

func quote(s string) string {
	return fmt.Sprintf("'%s'", strings.ReplaceAll(s, "'", "''"))
}

func schemaIdent(s string) string {
	return `"` + strings.ReplaceAll(s, `"`, `""`) + `"`
}
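A hypothetical example test (same package, not part of this diff) pinning down the behavior of the escaping helpers above.

package userprogram

import "fmt"

// Example_helpers shows how IPs become host CIDRs for the inet column and how
// single quotes are doubled for the generated SQL literals.
func Example_helpers() {
	fmt.Println(toHostCIDR("1.1.1.1"))
	fmt.Println(toHostCIDR("2001:db8::1"))
	fmt.Println(quote("O'Brien"))
	// Output:
	// 1.1.1.1/32
	// 2001:db8::1/128
	// 'O''Brien'
}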
187 internal/userprogram/sync.go (Normal file)
@@ -0,0 +1,187 @@
package userprogram

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	"github.com/jackc/pgx/v5"

	"geoip-rest/internal/geo"
	"geoip-rest/internal/importer"
)

const defaultMMDBPath = "/initial_data/GeoLite2-City.mmdb"

type SyncConfig struct {
	MySQL       MySQLConfig
	DatabaseURL string
	Backend     Backend
	MMDBPath    string
	LookupQuery string
	InitialCSV  string
	UpdateDir   string
	LogDir      string
	Schema      string
	Logger      *log.Logger
}

func (c *SyncConfig) defaults() {
	if c.InitialCSV == "" {
		c.InitialCSV = DefaultInitialCSV
	}
	if c.UpdateDir == "" {
		c.UpdateDir = DefaultUpdateDir
	}
	if c.LogDir == "" {
		c.LogDir = DefaultLogDir
	}
	if c.Schema == "" {
		c.Schema = DefaultSchema
	}
	if c.MMDBPath == "" {
		c.MMDBPath = defaultMMDBPath
	}
	if c.Logger == nil {
		c.Logger = log.Default()
	}
}

// Sync ensures replica table exists and imports initial data, then dumps and imports
// updates using the primary key high-water mark up to yesterday (KST).
func Sync(ctx context.Context, cfg SyncConfig) error {
	cfg.defaults()

	dumper, err := NewDumper(cfg.MySQL, cfg.UpdateDir)
	if err != nil {
		return fmt.Errorf("init dumper: %w", err)
	}
	defer dumper.Close()

	conn, err := pgx.Connect(ctx, cfg.DatabaseURL)
	if err != nil {
		return fmt.Errorf("connect postgres: %w", err)
	}
	defer conn.Close(context.Background())

	if err := importer.EnsureUserProgramReplica(ctx, conn, cfg.InitialCSV, cfg.Schema, cfg.LogDir); err != nil {
		return fmt.Errorf("ensure replica: %w", err)
	}

	lastID, err := importer.LatestID(ctx, conn, cfg.Schema, importer.ReplicaTable)
	if err != nil {
		return fmt.Errorf("read latest id: %w", err)
	}

	endDate := yesterdayKST()
	upperID, err := dumper.MaxIDUntil(ctx, endDate)
	if err != nil {
		return fmt.Errorf("read upstream max id: %w", err)
	}

	if upperID <= lastID {
		cfg.Logger.Printf("no dump needed (last_id=%d upstream_max=%d)", lastID, upperID)
		return nil
	}

	cfg.Logger.Printf("dumping ids (%d, %d] to %s", lastID, upperID, cfg.UpdateDir)
	csvPath, err := dumper.DumpRange(ctx, lastID, upperID, endDate)
	if err != nil {
		return fmt.Errorf("dump range: %w", err)
	}
	if csvPath == "" {
		cfg.Logger.Printf("no rows dumped (last_id=%d upstream_max=%d)", lastID, upperID)
		return nil
	}

	if err := importer.ImportUserProgramUpdates(ctx, conn, csvPath, cfg.Schema, cfg.LogDir); err != nil {
		return fmt.Errorf("import updates: %w", err)
	}

	if err := ensureIPGeoInfo(ctx, cfg, conn); err != nil {
		cfg.Logger.Printf("ip_geoinfo update warning: %v", err)
	}

	cfg.Logger.Printf("sync complete (last_id=%d -> %d)", lastID, upperID)

	if err := verifyCounts(ctx, cfg, dumper, conn, upperID); err != nil {
		cfg.Logger.Printf("sync verification warning: %v", err)
	}
	return nil
}

func toKST(t time.Time) time.Time {
	return t.In(kst())
}

func verifyCounts(ctx context.Context, cfg SyncConfig, dumper *Dumper, conn *pgx.Conn, upperID int64) error {
	sourceCount, err := dumper.CountUpToID(ctx, upperID)
	if err != nil {
		return fmt.Errorf("source count: %w", err)
	}
	targetCount, err := importer.CountUpToID(ctx, conn, cfg.Schema, importer.ReplicaTable, upperID)
	if err != nil {
		return fmt.Errorf("target count: %w", err)
	}
	if targetCount != sourceCount {
		return fmt.Errorf("count mismatch up to id %d (source=%d target=%d)", upperID, sourceCount, targetCount)
	}
	return nil
}

func ensureIPGeoInfo(ctx context.Context, cfg SyncConfig, conn *pgx.Conn) error {
	exists, err := ipGeoInfoExists(ctx, conn, cfg.Schema)
	if err != nil {
		return err
	}

	if !exists {
		seedPath := filepath.Join("/initial_data", "ip_geoinfo_seed_20251208.sql")
		if _, err := os.Stat(seedPath); err == nil {
			if err := ExecuteSQLFile(ctx, conn, seedPath); err != nil {
				return fmt.Errorf("execute seed sql: %w", err)
			}
			exists = true
		}
	}

	if err := EnsureIPGeoInfoTable(ctx, conn, cfg.Schema); err != nil {
		return err
	}

	ts := time.Now().In(kst()).Format("20060102-150405")
	ipListPath := filepath.Join(cfg.UpdateDir, fmt.Sprintf("public_ip_list_%s.csv", ts))
	if err := ExportPublicIPs(ctx, conn, cfg.Schema, ipListPath); err != nil {
		return fmt.Errorf("export public ip list: %w", err)
	}

	resolver, err := ResolveBackend(geo.Config{
		Backend:     geo.Backend(cfg.Backend),
		MMDBPath:    cfg.MMDBPath,
		DatabaseURL: cfg.DatabaseURL,
		LookupQuery: cfg.LookupQuery,
	})
	if err != nil {
		return fmt.Errorf("init resolver for ip_geoinfo: %w", err)
	}
	defer resolver.Close()

	sqlPath := filepath.Join(cfg.UpdateDir, fmt.Sprintf("ip_geoinfo_update-%s.sql", ts))
	count, err := GenerateIPGeoInfoSQL(ctx, conn, cfg.Schema, resolver, sqlPath, true)
	if err != nil {
		return fmt.Errorf("generate ip_geoinfo sql: %w", err)
	}
	if count == 0 {
		if !exists {
			return fmt.Errorf("seeded ip_geoinfo but no new IPs found for update")
		}
		return nil
	}
	if err := ExecuteSQLFile(ctx, conn, sqlPath); err != nil {
		return fmt.Errorf("execute ip_geoinfo sql: %w", err)
	}
	return nil
}
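The cmd/user_program_sync entrypoint is likewise not shown here; a minimal sketch of wiring SyncConfig from the env helpers in config.go, with DATABASE_URL assumed to be the Postgres DSN variable.

package main

import (
	"context"
	"log"
	"os"

	"geoip-rest/internal/userprogram"
)

func main() {
	mysqlCfg, err := userprogram.NewMySQLConfigFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	paths, err := userprogram.NewPathsFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Initial import (if needed), then dump/upsert ids up to yesterday (KST),
	// followed by the ip_geoinfo refresh and count verification.
	err = userprogram.Sync(context.Background(), userprogram.SyncConfig{
		MySQL:       mysqlCfg,
		DatabaseURL: os.Getenv("DATABASE_URL"),
		Backend:     userprogram.BackendFromEnv(),
		InitialCSV:  paths.InitialCSV,
		UpdateDir:   paths.UpdateDir,
		LogDir:      paths.LogDir,
		Schema:      paths.Schema,
	})
	if err != nil {
		log.Fatal(err)
	}
}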
37
to-do.md
37
to-do.md
@@ -1,6 +1,6 @@
|
||||
# TODO 기록
|
||||
|
||||
- 업데이트 시각 (KST): 2025-12-05 17:01:28 KST
|
||||
- 업데이트 시각 (KST): 2025-12-09 19:28:55 KST
|
||||
|
||||
## 완료된 항목
|
||||
- [x] Go Fiber 기반 GeoIP API 구조 결정 및 엔트리포인트 구현 (`cmd/server`)
|
||||
@@ -10,7 +10,38 @@
- [x] Switched the Dockerfile builder/runtime images to 1.25.5-trixie and removed unneeded packages
- [x] Wrote the README and added response samples
- [x] Added resolver unit tests (`internal/geo/resolver_test.go`)
- [x] Added the `user_program_info_replica` DDL/CSV importer (`id bigint`, text columns, KST timestamp parsing, bool flags) (done: 2025-12-09 18:32 KST)
- [x] Directory-based import for initial/daily CSVs plus log-file output (`log/`), and updated the upsert logic (done: 2025-12-09 19:06 KST)
- [x] Added an in-process cron scheduler to Fiber (runs the previous-day dump script and applies update_data; KST cron supported) (done: 2025-12-09 19:28 KST)
- [x] Removed the MySQL CLI dependency, added a Go-based dumper (`cmd/user_program_dump`), and switched `scripts/dump_and_import.sh` to use it (done: 2025-12-10 09:34 KST)
- [x] Added a scheduler toggle env (`USER_PROGRAM_CRON_ENABLE`) so cron runs only when it is true (done: 2025-12-10 09:45 KST)
- [x] Removed the cron-expression env (`USER_PROGRAM_CRON`) and fixed the schedule in code at 00:05 KST (done: 2025-12-10 09:56 KST)
- [x] Dump+import now runs through the Go CLI (`user-program-sync`) with no bash-script dependency; the scheduler calls the CLI directly (done: 2025-12-10 09:50 KST)
- [x] Consolidated initial load + backfill + daily updates into a Go library (`internal/userprogram`); refactored `user-program-sync` to import the initial CSVs and then dump/load up to the latest day (done: 2025-12-10 10:03 KST)
- [x] Switched the incremental cursor from the created_at date to the PK (id); refactored the Sync/Dump path to dump/upsert from the last id up to yesterday's max id (done: 2025-12-10 10:20 KST)
- [x] Made the container user UID/GID settable at build time and aligned volume ownership to fix permission errors (`APP_UID`/`APP_GID`, chown applied) (done: 2025-12-10 10:56 KST)
- [x] Access log written to file with 10MB rolling; headers truncated to 1KB; proxy IP information included (done: 2025-12-10 12:20 KST)
- [x] Automated initial/incremental upserts for the `ip_geoinfo` table: after sync completes, export the public_ip list to CSV, then GeoIP-resolve only the new IPs and generate/execute the SQL (done: 2025-12-10 12:27 KST)
- [x] Updated the compose command to run `user-program-import` automatically at container startup (`USER_PROGRAM_IMPORT_ON_START` flag) (done: 2025-12-10 13:25 KST)

## Planned
- [ ] Run `go mod tidy` to generate `go.sum` and pin dependencies
- [ ] Expand test coverage (table-driven cases, using a test mmdb fixture)
- [x] Add a `postgres` service from a dedicated PostgreSQL Docker image (or build stage) with `maxminddb_fdw` installed, mount the `GeoLite2-City.mmdb` volume at `/data`, and expose 5432 externally
- [x] Put init SQL under `/docker-entrypoint-initdb.d/` to define `CREATE EXTENSION maxminddb_fdw; SERVER maxminddb ...`, and design the required `FOREIGN TABLE`s/`VIEW`s (country/region/city/lat/lon/time_zone, etc.)
- [x] Design an `inet`-argument function/VIEW to optimize FDW-based lookups (e.g. `SELECT * FROM city_location WHERE network >>= inet($1) ORDER BY masklen(network) DESC LIMIT 1`; see the lookup sketch after this list)
- [x] Extend app configuration: add envs such as `GEOIP_BACKEND=mmdb|postgres` and `DATABASE_URL`, implement a Postgres resolver in `internal/geo` wired in via DI, and log a backend/DB health check at startup
- [ ] After adding the GeoLite mmdb and init-SQL directory mounts to the Postgres container, verify the compose.infra.yml/dev execution paths
- [x] In the single docker-compose stack, add a db healthcheck and set a depends_on condition so api waits for service_healthy
- [ ] Align response mapping/error handling so the DB resolver and the file resolver return the same response schema from the Fiber routes
- [ ] Tests: keep the file-based tests as-is; add DB-resolver integration tests (test containers/compose) and table-driven cases; consider an mmdb fixture that can be used without licensing issues
- [ ] Documentation: add to the README how to run Postgres/FDW, sample queries, security/port-exposure caveats, and the mmdb replacement procedure
- [x] Re-run `go mod tidy` to tidy dependencies and add the required DB driver
- [ ] Design the mmdb -> Postgres load pipeline after removing `maxminddb_fdw`: record the mmdb SHA256 in a table so a changed file triggers a staging load plus index build followed by a transactional-rename swap, and an unchanged file is skipped
- [ ] Design the mmdb -> Postgres load pipeline after removing `maxminddb_fdw`: load into a staging table with a Go-based converter and swap via transactional rename for zero downtime; define the update cadence and operating procedure (see the staged-swap sketch after this list)
- [ ] Trial-and-error note: in maxminddb-golang v1.11.0, `reader.Networks()` is a single-return function that does not return an error; do not use `reader.Networks(0)` or multi-value handling (do not retry this; see the iteration sketch after this list)
- [ ] Drop the standalone loader service from compose and run the loader from the api entrypoint; document it as a post-start hook and review the waiting strategy
- [ ] Postgres initial tuning: raise `max_wal_size` to 4GB to avoid checkpoint storms during the initial bulk load (reflected in deploy/postgres/init/01_tuning.sql)
- [ ] api startup in compose is delayed by waiting on loader completion; relax to the loader `service_started` condition, and document whether API startup and data loading should eventually run in parallel
- [ ] Design incremental backup of MySQL `user_program_info`: Postgres backup-table DDL (same columns, PK=id, index on `created_at`), `login_public_ip varchar(45)`, UTC-based
- [ ] Create the `sync_meta(last_synced_at)` table and define the watermark query: pull with `created_at > last_synced_at - interval '5 minutes'` and advance the meta row with `max(created_at)`
- [ ] Implement the incremental load pipeline: MySQL pull -> Postgres upsert (ON CONFLICT id) in batches, with empty-batch handling, time zone conversion, and consistency logging (see the upsert sketch after this list)
- [ ] Design the operational trigger: 15-minute cron by default; decide whether to add a manual API trigger (including health), plus failure retries and alert integration
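To make the `inet`-argument lookup item concrete, here is a minimal sketch of running that longest-prefix-match query from Go with pgx. The `city_location` view and the query shape come from the item itself; the selected columns, connection string, and the `lookupCity` helper are illustrative assumptions, not the actual schema.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

// lookupCity is a hypothetical helper running the longest-prefix-match
// query from the TODO item against the city_location view.
func lookupCity(ctx context.Context, conn *pgx.Conn, ip string) (country, city string, err error) {
	const q = `
		SELECT country, city
		FROM city_location
		WHERE network >>= inet($1)
		ORDER BY masklen(network) DESC
		LIMIT 1`
	err = conn.QueryRow(ctx, q, ip).Scan(&country, &city)
	return country, city, err
}

func main() {
	ctx := context.Background()
	// Placeholder connection string.
	conn, err := pgx.Connect(ctx, "postgres://localhost:5432/geoip")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	country, city, err := lookupCity(ctx, conn, "1.1.1.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(country, city)
}
```

`ORDER BY masklen(network) DESC LIMIT 1` picks the most specific containing network, which is the longest-prefix-match semantics the item is after.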
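For the two mmdb -> Postgres pipeline items, a minimal sketch of the transactional-rename swap, assuming the staging table has already been loaded and indexed by the Go converter and the SHA256 skip-check happened earlier. The table names `geoip_networks` and `geoip_networks_staging` are hypothetical.

```go
package main

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// swapStaging promotes the freshly loaded staging table in one transaction,
// so readers never observe a missing or half-loaded table.
func swapStaging(ctx context.Context, conn *pgx.Conn) error {
	tx, err := conn.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless after a successful Commit

	// Hypothetical table names: drop the live table and rename staging into place.
	if _, err := tx.Exec(ctx, `DROP TABLE IF EXISTS geoip_networks`); err != nil {
		return err
	}
	if _, err := tx.Exec(ctx, `ALTER TABLE geoip_networks_staging RENAME TO geoip_networks`); err != nil {
		return err
	}
	return tx.Commit(ctx)
}
```

Because DDL in PostgreSQL is transactional, concurrent lookups either see the old table or the new one; during the swap they briefly block on the ACCESS EXCLUSIVE lock rather than erroring, which is the zero-downtime behavior the item describes.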
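The maxminddb-golang note is worth pinning down with code. A sketch of the v1.11.0 iteration API as the note describes it: `Networks()` returns only the iterator, and errors surface per entry via `Network(...)` and after the loop via `Err()`. The record struct fields are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/oschwald/maxminddb-golang"
)

func main() {
	reader, err := maxminddb.Open("GeoLite2-City.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()

	// v1.11.0: Networks() has a single return value; it does not return an error.
	networks := reader.Networks()
	for networks.Next() {
		var record struct {
			Country struct {
				ISOCode string `maxminddb:"iso_code"`
			} `maxminddb:"country"`
		}
		// Per-entry decode errors come from Network(), not from Networks().
		subnet, err := networks.Network(&record)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(subnet, record.Country.ISOCode)
	}
	// Iterator-level errors surface via Err() after the loop.
	if err := networks.Err(); err != nil {
		log.Fatal(err)
	}
}
```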
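For the `sync_meta` watermark and upsert items, a minimal sketch of the Postgres side with pgx, assuming rows have already been pulled from MySQL with the 5-minute-overlap query. The `Row` struct, the column set beyond `id`/`login_public_ip`/`created_at`, the backup-table name, and the helper name are illustrative.

```go
package main

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
)

// Row is an illustrative subset of the user_program_info columns named in the items above.
type Row struct {
	ID            int64
	LoginPublicIP string
	CreatedAt     time.Time
}

// upsertBatch applies one pulled batch and advances the sync_meta watermark,
// following the ON CONFLICT (id) and max(created_at) scheme from the items above.
func upsertBatch(ctx context.Context, conn *pgx.Conn, rows []Row) error {
	if len(rows) == 0 {
		return nil // empty batch: nothing to apply, watermark stays put
	}
	tx, err := conn.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless after a successful Commit

	maxCreated := rows[0].CreatedAt
	for _, r := range rows {
		if _, err := tx.Exec(ctx, `
			INSERT INTO user_program_info (id, login_public_ip, created_at)
			VALUES ($1, $2, $3)
			ON CONFLICT (id) DO UPDATE
			SET login_public_ip = EXCLUDED.login_public_ip,
			    created_at      = EXCLUDED.created_at`,
			r.ID, r.LoginPublicIP, r.CreatedAt); err != nil {
			return err
		}
		if r.CreatedAt.After(maxCreated) {
			maxCreated = r.CreatedAt
		}
	}
	// Advance the watermark to max(created_at); the next pull re-reads a
	// 5-minute overlap (created_at > last_synced_at - interval '5 minutes').
	if _, err := tx.Exec(ctx,
		`UPDATE sync_meta SET last_synced_at = GREATEST(last_synced_at, $1)`,
		maxCreated); err != nil {
		return err
	}
	return tx.Commit(ctx)
}
```

The `GREATEST(...)` guard keeps the watermark monotonic even if batches land out of order, and the 5-minute overlap on the pull side makes the upsert idempotent with respect to re-read rows.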